Result: FAILURE
Tests: 1 failed / 2469 succeeded
Started: 2019-08-12 22:23
Elapsed: 26m38s
Revision:
Builder: gke-prow-ssd-pool-1a225945-9crp
links: {resultstore: https://source.cloud.google.com/results/invocations/79860c6e-ec5e-49c8-b674-60ba64347727/targets/test}
pod: b699c15d-bd4f-11e9-a0ae-ea43db2f3479
resultstore: https://source.cloud.google.com/results/invocations/79860c6e-ec5e-49c8-b674-60ba64347727/targets/test
infra-commit: 93842c6af
repo: k8s.io/kubernetes
repo-commit: f2c82f49e8df79bd52c7a9c74b310e2ad7805eb9
repos: {k8s.io/kubernetes: master}

Test Failures


k8s.io/kubernetes/test/integration/deployment TestDeploymentAvailableCondition 6.33s

go test -v k8s.io/kubernetes/test/integration/deployment -run TestDeploymentAvailableCondition$
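
A minimal local-reproduction sketch, assuming a kubernetes checkout and an etcd binary on PATH (an assumption; the apiserver in the log below dials etcd at http://127.0.0.1:2379, so the integration test expects one listening there):

# Start a throwaway etcd for the integration test's apiserver to use
# (assumed local binary; matches the ServerList seen in the log).
etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &

# Re-run only the failing test, mirroring the command reported by the job.
go test -v k8s.io/kubernetes/test/integration/deployment -run TestDeploymentAvailableCondition$

The klog output captured for the failing run follows.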
=== RUN   TestDeploymentAvailableCondition
I0812 22:43:58.231506  107866 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0812 22:43:58.231524  107866 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0812 22:43:58.231535  107866 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0812 22:43:58.231544  107866 master.go:234] Using reconciler: 
I0812 22:43:58.233032  107866 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.233120  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.233128  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.233165  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.233230  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.233737  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.233780  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.233944  107866 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0812 22:43:58.233979  107866 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.234198  107866 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0812 22:43:58.234341  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.234432  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.234494  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.234574  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.234887  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.234990  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.235171  107866 store.go:1342] Monitoring events count at <storage-prefix>//events
I0812 22:43:58.235223  107866 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0812 22:43:58.235250  107866 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.235530  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.235604  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.235740  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.235841  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.236144  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.236314  107866 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0812 22:43:58.236759  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.236398  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.236450  107866 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0812 22:43:58.237139  107866 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.237336  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.237347  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.237377  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.237608  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.237909  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.238175  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.238378  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.238766  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.238869  107866 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0812 22:43:58.238890  107866 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0812 22:43:58.239239  107866 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.239364  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.239437  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.239538  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.239718  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.239789  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.240374  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.240469  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.240475  107866 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0812 22:43:58.240492  107866 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0812 22:43:58.240649  107866 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.240701  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.240708  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.240728  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.240766  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.241298  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.242126  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.242239  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.242436  107866 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0812 22:43:58.242526  107866 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0812 22:43:58.243010  107866 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.243190  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.243253  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.243340  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.243411  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.243510  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.243873  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.243974  107866 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0812 22:43:58.244090  107866 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.244125  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.244151  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.244160  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.244194  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.244244  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.244277  107866 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0812 22:43:58.244485  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.244567  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.244587  107866 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0812 22:43:58.244632  107866 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0812 22:43:58.244904  107866 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.244967  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.244978  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.245010  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.245057  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.245295  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.245433  107866 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0812 22:43:58.245560  107866 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.245663  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.245677  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.245709  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.245752  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.245785  107866 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0812 22:43:58.246063  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.247468  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.247522  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.247604  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.247642  107866 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0812 22:43:58.247785  107866 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.247816  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.247873  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.247883  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.247917  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.247979  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.248011  107866 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0812 22:43:58.248064  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.248370  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.248505  107866 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0812 22:43:58.248687  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.248732  107866 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0812 22:43:58.248879  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.248951  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.248961  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.248991  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.249334  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.249563  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.249695  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.249818  107866 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0812 22:43:58.249947  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.250055  107866 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0812 22:43:58.250103  107866 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.250195  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.250237  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.250302  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.250368  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.250868  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.250993  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.251215  107866 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0812 22:43:58.251270  107866 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0812 22:43:58.251231  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.251429  107866 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.251537  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.251573  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.251823  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.252047  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.252195  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.252651  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.252815  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.252983  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.253096  107866 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0812 22:43:58.253132  107866 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.253163  107866 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0812 22:43:58.253222  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.253239  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.253266  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.253365  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.253717  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.253822  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.253838  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.253858  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.253896  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.253929  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.254197  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.254344  107866 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.254401  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.254407  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.254428  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.254468  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.254499  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.254747  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.254894  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.254939  107866 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0812 22:43:58.255068  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.254991  107866 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0812 22:43:58.255742  107866 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.256067  107866 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.256145  107866 watch_cache.go:405] Replace watchCache (rev: 21304) 
I0812 22:43:58.256894  107866 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.257894  107866 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.259351  107866 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.260908  107866 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.261348  107866 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.261463  107866 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.261670  107866 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.262209  107866 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.262948  107866 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.263150  107866 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.263981  107866 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.264239  107866 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.264894  107866 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.265114  107866 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.265931  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.266147  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.266274  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.266382  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.266529  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.266711  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.266919  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.267908  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.268251  107866 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.269144  107866 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.269944  107866 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.270193  107866 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.270683  107866 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.271331  107866 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.271750  107866 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.272377  107866 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.273205  107866 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.273765  107866 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.274598  107866 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.274929  107866 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.275037  107866 master.go:418] Skipping disabled API group "auditregistration.k8s.io".
I0812 22:43:58.275107  107866 master.go:426] Enabling API group "authentication.k8s.io".
I0812 22:43:58.275127  107866 master.go:426] Enabling API group "authorization.k8s.io".
I0812 22:43:58.275424  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.275544  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.275560  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.275606  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.275690  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.276085  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.276238  107866 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0812 22:43:58.276390  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.276458  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.276469  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.276508  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.276589  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.276684  107866 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0812 22:43:58.276872  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.277225  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.277267  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.277364  107866 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0812 22:43:58.277395  107866 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0812 22:43:58.277599  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.277732  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.277818  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.277828  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.277862  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.277932  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.278398  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.278467  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.278511  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.278533  107866 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0812 22:43:58.278552  107866 master.go:426] Enabling API group "autoscaling".
I0812 22:43:58.278556  107866 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0812 22:43:58.278726  107866 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.278842  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.278856  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.278892  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.279150  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.279241  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.279471  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.279488  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.279590  107866 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0812 22:43:58.279727  107866 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.279788  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.279795  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.279820  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.279871  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.279874  107866 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0812 22:43:58.280090  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.280233  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.280341  107866 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0812 22:43:58.280360  107866 master.go:426] Enabling API group "batch".
I0812 22:43:58.280465  107866 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.280512  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.280522  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.280553  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.280675  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.280863  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.280719  107866 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0812 22:43:58.281054  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.281145  107866 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0812 22:43:58.281163  107866 master.go:426] Enabling API group "certificates.k8s.io".
I0812 22:43:58.281221  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.281283  107866 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.281333  107866 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0812 22:43:58.281413  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.281427  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.281510  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.281570  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.281654  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.281980  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.282033  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.282154  107866 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0812 22:43:58.282184  107866 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0812 22:43:58.282360  107866 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.282451  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.282461  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.282496  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.282526  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.282844  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.283293  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.283476  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.283723  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.283989  107866 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0812 22:43:58.284014  107866 master.go:426] Enabling API group "coordination.k8s.io".
I0812 22:43:58.284044  107866 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0812 22:43:58.284172  107866 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.284236  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.284245  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.284267  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.284366  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.284702  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.284737  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.284828  107866 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0812 22:43:58.284851  107866 master.go:426] Enabling API group "extensions".
I0812 22:43:58.284925  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.284987  107866 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.285056  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.285067  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.285099  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.285158  107866 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0812 22:43:58.285309  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.286345  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.286516  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.286560  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.287354  107866 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0812 22:43:58.287514  107866 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0812 22:43:58.287881  107866 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.287981  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.287994  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.288032  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.288774  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.289042  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.289982  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.290133  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.290321  107866 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0812 22:43:58.290410  107866 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0812 22:43:58.291412  107866 watch_cache.go:405] Replace watchCache (rev: 21305) 
I0812 22:43:58.291653  107866 master.go:426] Enabling API group "networking.k8s.io".
I0812 22:43:58.291985  107866 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.292084  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.292097  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.292133  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.292194  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.292520  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.292646  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.293409  107866 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0812 22:43:58.293432  107866 master.go:426] Enabling API group "node.k8s.io".
I0812 22:43:58.293516  107866 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0812 22:43:58.293583  107866 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.293689  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.293701  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.293732  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.293777  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.294074  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.294190  107866 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0812 22:43:58.294198  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.294235  107866 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0812 22:43:58.294336  107866 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.294402  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.294412  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.294439  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.294475  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.294772  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.294967  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.295155  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.295976  107866 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0812 22:43:58.296002  107866 master.go:426] Enabling API group "policy".
I0812 22:43:58.296010  107866 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0812 22:43:58.296036  107866 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.296356  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.296603  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.297124  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.297144  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.297183  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.297230  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.297742  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.297831  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.297966  107866 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0812 22:43:58.298048  107866 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0812 22:43:58.298131  107866 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.298196  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.298206  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.298236  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.298275  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.298553  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.298783  107866 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0812 22:43:58.298818  107866 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.298898  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.298901  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.298941  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.298942  107866 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0812 22:43:58.298947  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.298976  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.299072  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.299246  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.299331  107866 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0812 22:43:58.299431  107866 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.299475  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.299481  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.299501  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.299507  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.299536  107866 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0812 22:43:58.299549  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.299785  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.299873  107866 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0812 22:43:58.299916  107866 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.299966  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.299976  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.300004  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.300041  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.300080  107866 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0812 22:43:58.300099  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.300393  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.300489  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.300737  107866 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0812 22:43:58.300892  107866 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.300908  107866 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0812 22:43:58.300960  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.300970  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.300996  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.301095  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.301103  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.301397  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.301492  107866 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0812 22:43:58.301517  107866 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.301573  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.301582  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.301641  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.301672  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.301699  107866 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0812 22:43:58.301885  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.302133  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.302148  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.302177  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.302204  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.302242  107866 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0812 22:43:58.302371  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.302375  107866 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.302440  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.302449  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.302500  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.302535  107866 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0812 22:43:58.302727  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.303189  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.303230  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.303304  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.303306  107866 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0812 22:43:58.303338  107866 master.go:426] Enabling API group "rbac.authorization.k8s.io".
I0812 22:43:58.303405  107866 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0812 22:43:58.305056  107866 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.305134  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.305145  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.305175  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.305218  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.305410  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.305467  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.305565  107866 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0812 22:43:58.305674  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.305717  107866 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0812 22:43:58.305718  107866 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.305802  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.305812  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.305838  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.305930  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.306117  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.306244  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.306311  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.306345  107866 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0812 22:43:58.306359  107866 master.go:426] Enabling API group "scheduling.k8s.io".
I0812 22:43:58.306419  107866 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0812 22:43:58.306486  107866 master.go:418] Skipping disabled API group "settings.k8s.io".
I0812 22:43:58.306644  107866 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.306711  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.306721  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.306750  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.306795  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.307069  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.307100  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.307105  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.307177  107866 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0812 22:43:58.307214  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.307292  107866 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.307306  107866 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0812 22:43:58.307358  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.307368  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.307392  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.307526  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.308130  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.308192  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.308238  107866 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0812 22:43:58.308269  107866 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.308286  107866 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0812 22:43:58.308400  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.308412  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.308446  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.308485  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.309130  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.309167  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.309326  107866 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0812 22:43:58.309361  107866 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0812 22:43:58.309357  107866 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.309452  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.309463  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.309530  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.309603  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.309879  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.309969  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.310074  107866 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0812 22:43:58.310194  107866 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.310243  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.310250  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.310270  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.310292  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.310305  107866 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0812 22:43:58.310406  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.311090  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.311204  107866 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0812 22:43:58.311381  107866 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.311406  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.311442  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.311453  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.311507  107866 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0812 22:43:58.311509  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.311561  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.311679  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.311735  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.312424  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.313563  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.313694  107866 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0812 22:43:58.313716  107866 master.go:426] Enabling API group "storage.k8s.io".
I0812 22:43:58.313883  107866 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.313944  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.313954  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.313984  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.314026  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.314058  107866 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0812 22:43:58.314318  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.314653  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.314678  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.314808  107866 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0812 22:43:58.314874  107866 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0812 22:43:58.315011  107866 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.315090  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.315100  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.315161  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.315211  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.316178  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.316340  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.316691  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.316732  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.317020  107866 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0812 22:43:58.317072  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.317129  107866 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0812 22:43:58.317181  107866 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.317253  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.317264  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.317331  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.317392  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.317739  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.317915  107866 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0812 22:43:58.318033  107866 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.318158  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.318169  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.318201  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.318308  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.318340  107866 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0812 22:43:58.318501  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.318990  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.319032  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.319128  107866 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0812 22:43:58.319164  107866 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0812 22:43:58.319170  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.319437  107866 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.319502  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.319511  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.319537  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.319602  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.319782  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.319894  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.319967  107866 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0812 22:43:58.320004  107866 master.go:426] Enabling API group "apps".
I0812 22:43:58.320041  107866 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.320092  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.320101  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.320149  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.320181  107866 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0812 22:43:58.320201  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.320261  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.320523  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.320580  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.320720  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.320790  107866 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0812 22:43:58.320818  107866 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.320997  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.321015  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.321037  107866 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0812 22:43:58.321052  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.321129  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.321520  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.321659  107866 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0812 22:43:58.321691  107866 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.321769  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.321779  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.321807  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.321839  107866 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0812 22:43:58.321921  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.322078  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.322106  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.322196  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.322458  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.322603  107866 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0812 22:43:58.322659  107866 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.322720  107866 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0812 22:43:58.322752  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.322762  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.322788  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.322682  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.322927  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.322958  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.323426  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.323500  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.323529  107866 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0812 22:43:58.323542  107866 master.go:426] Enabling API group "admissionregistration.k8s.io".
I0812 22:43:58.323592  107866 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.323655  107866 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0812 22:43:58.323818  107866 client.go:354] parsed scheme: ""
I0812 22:43:58.323829  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:58.323858  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:58.323864  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.323989  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.324358  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:58.324467  107866 store.go:1342] Monitoring events count at <storage-prefix>//events
I0812 22:43:58.324480  107866 master.go:426] Enabling API group "events.k8s.io".
I0812 22:43:58.324754  107866 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.324992  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:58.324987  107866 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.325106  107866 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0812 22:43:58.325283  107866 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.325382  107866 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.325496  107866 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.325584  107866 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.325808  107866 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.325938  107866 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.325848  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.326036  107866 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.326147  107866 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.327225  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.327524  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.328220  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.328433  107866 watch_cache.go:405] Replace watchCache (rev: 21306) 
I0812 22:43:58.328557  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.329292  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.329542  107866 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.330181  107866 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.330429  107866 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.331392  107866 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.331681  107866 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 22:43:58.331738  107866 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0812 22:43:58.332275  107866 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.332428  107866 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.332639  107866 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.333305  107866 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.334006  107866 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.334906  107866 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.335222  107866 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.336269  107866 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.337125  107866 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.337353  107866 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.337952  107866 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 22:43:58.338023  107866 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0812 22:43:58.338717  107866 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.338980  107866 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.339423  107866 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.340007  107866 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.340398  107866 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.352122  107866 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.364198  107866 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.374083  107866 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.375794  107866 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.378446  107866 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.380913  107866 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 22:43:58.381040  107866 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0812 22:43:58.382695  107866 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.384797  107866 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 22:43:58.384914  107866 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0812 22:43:58.386487  107866 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.388127  107866 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.388737  107866 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.390219  107866 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.392292  107866 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.393304  107866 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.394819  107866 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 22:43:58.394934  107866 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0812 22:43:58.396760  107866 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.398681  107866 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.399222  107866 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.401519  107866 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.402009  107866 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.402588  107866 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.404241  107866 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.405024  107866 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.405741  107866 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.409045  107866 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.409584  107866 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.410077  107866 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 22:43:58.410226  107866 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0812 22:43:58.410244  107866 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0812 22:43:58.412878  107866 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.415146  107866 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.417255  107866 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.419306  107866 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.421238  107866 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"218e4023-fe41-4af1-b29c-74ca75e0ef00", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 22:43:58.438495  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.439247  107866 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0812 22:43:58.439296  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.439314  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.439328  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.439339  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.439404  107866 httplog.go:90] GET /healthz: (1.217192ms) 0 [Go-http-client/1.1 127.0.0.1:56600]
I0812 22:43:58.445684  107866 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (7.301781ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:58.449855  107866 httplog.go:90] GET /api/v1/services: (1.678238ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:58.455022  107866 httplog.go:90] GET /api/v1/services: (1.059968ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:58.459087  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.459121  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.459137  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.459147  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.459155  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.459183  107866 httplog.go:90] GET /healthz: (186.066µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:58.460518  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.501475ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0812 22:43:58.462348  107866 httplog.go:90] GET /api/v1/services: (1.276106ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:58.462778  107866 httplog.go:90] POST /api/v1/namespaces: (1.579463ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0812 22:43:58.462805  107866 httplog.go:90] GET /api/v1/services: (1.078245ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:58.466661  107866 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.293871ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0812 22:43:58.469047  107866 httplog.go:90] POST /api/v1/namespaces: (1.928245ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:58.470548  107866 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.067391ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:58.473529  107866 httplog.go:90] POST /api/v1/namespaces: (1.94287ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:58.544992  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.545035  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.545049  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.545059  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.545066  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.545100  107866 httplog.go:90] GET /healthz: (261.967µs) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:58.560261  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.560301  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.560317  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.560327  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.560337  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.560367  107866 httplog.go:90] GET /healthz: (259.767µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:58.641284  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.641322  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.641337  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.641347  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.641355  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.641391  107866 httplog.go:90] GET /healthz: (253.944µs) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:58.660419  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.660450  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.660463  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.660473  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.660481  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.660514  107866 httplog.go:90] GET /healthz: (244.306µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:58.740480  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.740514  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.740528  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.740538  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.740546  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.740576  107866 httplog.go:90] GET /healthz: (262.828µs) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:58.760317  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.760354  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.760368  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.760379  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.760387  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.760427  107866 httplog.go:90] GET /healthz: (255.958µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:58.840539  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.840575  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.840588  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.840598  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.840606  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.840685  107866 httplog.go:90] GET /healthz: (298.566µs) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:58.860355  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.860399  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.860414  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.860427  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.860435  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.860470  107866 httplog.go:90] GET /healthz: (281.355µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:58.940605  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.940657  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.940671  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.940682  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.940690  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.940725  107866 httplog.go:90] GET /healthz: (282.342µs) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:58.960446  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:58.960503  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:58.960520  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:58.960530  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:58.960538  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:58.960575  107866 httplog.go:90] GET /healthz: (300.08µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.040798  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:59.040843  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.040857  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:59.040868  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:59.040880  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:59.040913  107866 httplog.go:90] GET /healthz: (295.404µs) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:59.060400  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:59.060452  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.060463  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:59.060470  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:59.060476  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:59.060504  107866 httplog.go:90] GET /healthz: (290.403µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.140518  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:59.140558  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.140574  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:59.140584  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:59.140593  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:59.140644  107866 httplog.go:90] GET /healthz: (273.454µs) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:59.160364  107866 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 22:43:59.160410  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.160423  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:59.160432  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:59.160441  107866 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:59.160478  107866 httplog.go:90] GET /healthz: (267.911µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.231513  107866 client.go:354] parsed scheme: ""
I0812 22:43:59.231550  107866 client.go:354] scheme "" not registered, fallback to default scheme
I0812 22:43:59.231604  107866 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 22:43:59.231714  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:59.232225  107866 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 22:43:59.232309  107866 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 22:43:59.241626  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.241668  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:59.241679  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:59.241689  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:59.241731  107866 httplog.go:90] GET /healthz: (1.352097ms) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:59.262428  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.262465  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:59.262477  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:59.262488  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:59.262531  107866 httplog.go:90] GET /healthz: (1.28198ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.341910  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.341988  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:59.342000  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:59.342006  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:59.342054  107866 httplog.go:90] GET /healthz: (1.701075ms) 0 [Go-http-client/1.1 127.0.0.1:56604]
I0812 22:43:59.361348  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.361381  107866 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 22:43:59.361388  107866 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 22:43:59.361395  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 22:43:59.361441  107866 httplog.go:90] GET /healthz: (1.253803ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.427880  107866 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.746982ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.428036  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.196184ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.428309  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.398421ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.430063  107866 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.770055ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.430794  107866 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.012789ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.430798  107866 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0812 22:43:59.431058  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.292285ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.431997  107866 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.013745ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.432860  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.421988ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.433083  107866 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.906449ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.433817  107866 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.260037ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.433961  107866 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0812 22:43:59.433974  107866 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0812 22:43:59.434315  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.098219ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0812 22:43:59.435842  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (982.463µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.437352  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.179836ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.438437  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (728.487µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.439879  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (934.797µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.441000  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.441030  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.441062  107866 httplog.go:90] GET /healthz: (892.676µs) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:43:59.441065  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (874.251µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.442336  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (841.146µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.444512  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.767178ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.444742  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0812 22:43:59.445981  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (988.168µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.448154  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.751862ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.448347  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0812 22:43:59.449469  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (950.162µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.451544  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.506646ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.451884  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0812 22:43:59.453031  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (974.498µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.454919  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.501596ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.455067  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0812 22:43:59.456128  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (880.158µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.458392  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.919774ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.458585  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0812 22:43:59.459830  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (985.763µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.460962  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.461014  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.461050  107866 httplog.go:90] GET /healthz: (1.043244ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.461759  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.620258ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.461978  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0812 22:43:59.463016  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (913.286µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.464682  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240489ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.464832  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0812 22:43:59.465952  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (925.986µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.467383  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167789ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.467599  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0812 22:43:59.468840  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (954.157µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.470861  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.566387ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.471243  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0812 22:43:59.472489  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (926.705µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.474828  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.897992ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.475056  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0812 22:43:59.476117  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (891.172µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.478175  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.67531ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.478376  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0812 22:43:59.479464  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (902.643µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.495741  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (15.682518ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.496076  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0812 22:43:59.498478  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.581979ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.500643  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.681001ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.501156  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0812 22:43:59.503260  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.489514ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.505414  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.522446ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.505808  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0812 22:43:59.507216  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (990.578µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.509179  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.503837ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.509368  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0812 22:43:59.510368  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (792.565µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.512912  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.891078ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.513403  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0812 22:43:59.515059  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.303387ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.518327  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.361231ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.519328  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0812 22:43:59.522404  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (2.80597ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.525671  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.592708ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.525937  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0812 22:43:59.530898  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (4.797344ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.532967  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549508ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.533225  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0812 22:43:59.534452  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (986.563µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.536823  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.738146ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.537883  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0812 22:43:59.539704  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.481353ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.542021  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.542054  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.542090  107866 httplog.go:90] GET /healthz: (1.613901ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:43:59.542499  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.701556ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.542783  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0812 22:43:59.544161  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.174496ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.546480  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.927778ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.546801  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0812 22:43:59.547983  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (960.899µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.550274  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.718921ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.550687  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0812 22:43:59.552158  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.255811ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.555778  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.032079ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.557011  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0812 22:43:59.558197  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (991.622µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.560247  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567371ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.560544  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0812 22:43:59.561659  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.561687  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.561723  107866 httplog.go:90] GET /healthz: (1.656682ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.562204  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.419173ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.564917  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.096681ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.565166  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0812 22:43:59.566566  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.185632ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.569992  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.757579ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.570592  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0812 22:43:59.572146  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.334545ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.575278  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.260406ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.575647  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0812 22:43:59.577760  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.936402ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.580806  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.33278ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.581091  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0812 22:43:59.582469  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.1025ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.584398  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.490888ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.584791  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0812 22:43:59.586300  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.314582ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.588355  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639318ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.588742  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0812 22:43:59.590700  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.684131ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.593417  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.286316ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.593702  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0812 22:43:59.596226  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (2.305955ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.600751  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.726827ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.601066  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0812 22:43:59.603085  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.548456ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.605282  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40496ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.605853  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0812 22:43:59.607191  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.001312ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.610176  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.423252ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.610552  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0812 22:43:59.612474  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.68049ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.614679  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6096ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.614926  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0812 22:43:59.616396  107866 cacher.go:763] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0812 22:43:59.617080  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.957386ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.622826  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.331836ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.623128  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0812 22:43:59.624687  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.234375ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.626862  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.68845ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.627114  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0812 22:43:59.628259  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (890.121µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.630952  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.919471ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.631406  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0812 22:43:59.632743  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.071088ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.634969  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.731068ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.635381  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0812 22:43:59.637738  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (2.090513ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.640588  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.37752ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.640869  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0812 22:43:59.641431  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.641459  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.641484  107866 httplog.go:90] GET /healthz: (1.134322ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:43:59.642097  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (952.189µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.643914  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.357833ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.644548  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0812 22:43:59.645740  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (921.747µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.647764  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.566809ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.647996  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0812 22:43:59.649285  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.019565ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.651559  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.508731ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.651829  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0812 22:43:59.652927  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (898.932µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.654857  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.579247ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.655074  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0812 22:43:59.656305  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (879.942µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.658257  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.538295ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.658660  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0812 22:43:59.659837  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (922.301µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.660876  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.660903  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.660927  107866 httplog.go:90] GET /healthz: (826.484µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0812 22:43:59.661712  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.482363ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.662056  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0812 22:43:59.663222  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (859.63µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.665302  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597154ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.665607  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0812 22:43:59.667024  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.124898ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.668895  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.396431ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.669111  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0812 22:43:59.670439  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.003636ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.672826  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.743701ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.673119  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0812 22:43:59.674415  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.044808ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.676867  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.92805ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.677065  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0812 22:43:59.678659  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.266369ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.688852  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.745758ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.689147  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0812 22:43:59.707410  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.399692ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.727836  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.733544ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.728143  107866 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0812 22:43:59.741447  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.741680  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.741956  107866 httplog.go:90] GET /healthz: (1.616847ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:43:59.747933  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.956099ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.761355  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.761555  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.761765  107866 httplog.go:90] GET /healthz: (1.547415ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.768334  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.391958ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.768757  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0812 22:43:59.787527  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.395521ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.808069  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.023822ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.808441  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0812 22:43:59.827516  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.341918ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.841966  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.842001  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.842078  107866 httplog.go:90] GET /healthz: (1.609653ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:43:59.848364  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.179476ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.848809  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0812 22:43:59.861240  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.861392  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.861501  107866 httplog.go:90] GET /healthz: (1.302529ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.867710  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.661095ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.898205  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.628349ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.898532  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0812 22:43:59.907479  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.560135ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.928166  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.224426ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.928446  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0812 22:43:59.941436  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.941477  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.941522  107866 httplog.go:90] GET /healthz: (1.138566ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:43:59.947256  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.301719ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.961422  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:43:59.961458  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:43:59.961499  107866 httplog.go:90] GET /healthz: (1.346816ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.968381  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.363478ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:43:59.968603  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0812 22:43:59.988769  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.773581ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.008830  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.686662ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.009101  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0812 22:44:00.027532  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.536588ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.041502  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.041539  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.041588  107866 httplog.go:90] GET /healthz: (1.133877ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.048056  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.066144ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.048363  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0812 22:44:00.061451  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.061485  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.061526  107866 httplog.go:90] GET /healthz: (1.220118ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.067840  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.865935ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.087953  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.922985ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.088255  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0812 22:44:00.107379  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.413659ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.128210  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.156531ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.128777  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0812 22:44:00.141233  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.141272  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.141316  107866 httplog.go:90] GET /healthz: (1.042274ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.147326  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.376204ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.162027  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.162061  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.162095  107866 httplog.go:90] GET /healthz: (1.621311ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.168269  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.288926ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.168510  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0812 22:44:00.187875  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.737559ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.208339  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.29611ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.208748  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0812 22:44:00.227699  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.648159ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.242081  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.242117  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.242152  107866 httplog.go:90] GET /healthz: (1.299376ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.248364  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.365075ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.248868  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0812 22:44:00.261209  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.261247  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.261376  107866 httplog.go:90] GET /healthz: (1.214059ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.267688  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.669269ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.287958  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.96447ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.288242  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0812 22:44:00.307656  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.394131ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.328462  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.424557ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.328756  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0812 22:44:00.341465  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.341495  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.341606  107866 httplog.go:90] GET /healthz: (1.083416ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.347919  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.813106ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.361431  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.361470  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.361512  107866 httplog.go:90] GET /healthz: (1.285559ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.368338  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.34601ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.368577  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0812 22:44:00.387598  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.4216ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.408760  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.678364ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.409011  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0812 22:44:00.427920  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.621036ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.441566  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.441609  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.441694  107866 httplog.go:90] GET /healthz: (1.324324ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.448382  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.261523ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.448947  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0812 22:44:00.461227  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.461263  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.461332  107866 httplog.go:90] GET /healthz: (1.092583ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.467656  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.591397ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.488292  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.17523ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.488560  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0812 22:44:00.507689  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.508079ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.528054  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.977942ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.528587  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0812 22:44:00.541664  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.541713  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.541789  107866 httplog.go:90] GET /healthz: (1.447937ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.547509  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.536034ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.561253  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.561292  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.561341  107866 httplog.go:90] GET /healthz: (1.197437ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.568287  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.319216ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.568541  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0812 22:44:00.587713  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.632916ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.609188  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.46713ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.609468  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0812 22:44:00.627236  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.300516ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.641435  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.641630  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.641897  107866 httplog.go:90] GET /healthz: (1.523666ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.648518  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.395818ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.649146  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0812 22:44:00.661371  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.661402  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.661439  107866 httplog.go:90] GET /healthz: (1.176354ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.667527  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.553208ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.688229  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.219992ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.688529  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0812 22:44:00.708357  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (2.345884ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.728107  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.064322ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.728533  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0812 22:44:00.741790  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.741831  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.741884  107866 httplog.go:90] GET /healthz: (1.49486ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.747365  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.403241ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.761235  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.761346  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.761411  107866 httplog.go:90] GET /healthz: (1.268783ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.768477  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.412708ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.768805  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0812 22:44:00.787379  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.387967ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.808487  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.477292ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.809039  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0812 22:44:00.827430  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.389702ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.841780  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.841809  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.841841  107866 httplog.go:90] GET /healthz: (1.177151ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.848201  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.235ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.848453  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0812 22:44:00.861282  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.861321  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.861364  107866 httplog.go:90] GET /healthz: (1.162071ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.868287  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.342387ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.888061  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.052148ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.888273  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0812 22:44:00.907514  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.506725ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.928235  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.286986ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.928716  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0812 22:44:00.941516  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.941553  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.941605  107866 httplog.go:90] GET /healthz: (1.197187ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:00.947226  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.258178ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.961396  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:00.961430  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:00.961482  107866 httplog.go:90] GET /healthz: (1.184208ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.967947  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.953139ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:00.968252  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0812 22:44:00.987648  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.59888ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.008156  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.168302ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.008407  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0812 22:44:01.027379  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.386787ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.043005  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.043053  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.043109  107866 httplog.go:90] GET /healthz: (2.405947ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:01.047937  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.974446ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.048202  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0812 22:44:01.061453  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.061494  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.061540  107866 httplog.go:90] GET /healthz: (1.271603ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.067351  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.404514ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.088164  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.156832ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.088553  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0812 22:44:01.107523  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.491967ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.128813  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.708979ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.129306  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0812 22:44:01.141514  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.141549  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.141585  107866 httplog.go:90] GET /healthz: (1.230536ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:01.147139  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.233904ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.161432  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.161481  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.161546  107866 httplog.go:90] GET /healthz: (1.329404ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.168225  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.237541ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.168484  107866 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
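Every system: clusterrolebinding above follows the same reconcile pattern: a GET that returns 404, then a POST that creates the object and returns 201. A minimal client-go sketch of that ensure-if-missing step follows, assuming a recent client-go (where Get/Create take a context) and a kubeconfig at the default location; the binding name and contents are placeholders, not the real bootstrap policy.

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureClusterRoleBinding mirrors the "GET (404) then POST (201)" sequence in
// the log: create the binding only if it does not already exist.
func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, desired *rbacv1.ClusterRoleBinding) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, desired.Name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, desired, metav1.CreateOptions{})
		return err
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Placeholder binding purely for illustration.
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "system:controller:example"},
		RoleRef:    rbacv1.RoleRef{APIGroup: rbacv1.GroupName, Kind: "ClusterRole", Name: "system:controller:example"},
	}
	fmt.Println(ensureClusterRoleBinding(context.Background(), cs, crb))
}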
I0812 22:44:01.187684  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.65104ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.189742  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.399361ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.208485  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.437084ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.208755  107866 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0812 22:44:01.227515  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.540611ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.229603  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.528312ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.241855  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.241897  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.241940  107866 httplog.go:90] GET /healthz: (1.56518ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:01.248573  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.565828ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.248874  107866 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0812 22:44:01.264910  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.264942  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.264977  107866 httplog.go:90] GET /healthz: (4.781409ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.266719  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (921.748µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.268509  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.405656ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.288472  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.395156ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.288926  107866 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0812 22:44:01.307518  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.443838ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.309703  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.513088ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.328299  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.191126ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.328600  107866 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0812 22:44:01.341640  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.341830  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.341986  107866 httplog.go:90] GET /healthz: (1.591322ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:01.347460  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.52583ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.349425  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.381185ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.361454  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.361497  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.361538  107866 httplog.go:90] GET /healthz: (1.358916ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.368266  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.321577ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.368684  107866 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0812 22:44:01.387655  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.411221ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.389971  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.5032ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.408593  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.635912ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.408955  107866 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0812 22:44:01.427409  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.424594ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.434647  107866 httplog.go:90] GET /api/v1/namespaces/kube-public: (6.487554ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.441695  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.441977  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.442220  107866 httplog.go:90] GET /healthz: (1.790461ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:01.447898  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.868296ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.448556  107866 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0812 22:44:01.461542  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.461582  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.461676  107866 httplog.go:90] GET /healthz: (1.3302ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.467436  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.479656ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.469466  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.321813ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.489162  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.67476ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.489762  107866 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0812 22:44:01.507340  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.17798ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.509237  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.301838ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.528406  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.409559ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.528756  107866 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0812 22:44:01.541795  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.541828  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.541871  107866 httplog.go:90] GET /healthz: (1.301883ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:01.547359  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.438805ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.550170  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.298759ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.561061  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.561093  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.561138  107866 httplog.go:90] GET /healthz: (1.032447ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.568258  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.890324ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.568553  107866 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0812 22:44:01.587364  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.180812ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.589730  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.702781ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.609078  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.083018ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.609403  107866 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0812 22:44:01.627567  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.584238ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.629392  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.304743ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.641530  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.641562  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.641605  107866 httplog.go:90] GET /healthz: (1.290218ms) 0 [Go-http-client/1.1 127.0.0.1:56632]
I0812 22:44:01.647691  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.775639ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.647991  107866 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0812 22:44:01.661655  107866 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 22:44:01.661693  107866 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 22:44:01.661754  107866 httplog.go:90] GET /healthz: (1.531864ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.667471  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.498789ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.669446  107866 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.595061ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.688217  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.177133ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.688815  107866 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0812 22:44:01.707572  107866 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.600382ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.709668  107866 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.385038ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.728456  107866 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.39946ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.729231  107866 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0812 22:44:01.741536  107866 httplog.go:90] GET /healthz: (1.216384ms) 200 [Go-http-client/1.1 127.0.0.1:56632]
W0812 22:44:01.743410  107866 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 22:44:01.743463  107866 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 22:44:01.743490  107866 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0812 22:44:01.746855  107866 httplog.go:90] POST /apis/apps/v1/namespaces/test-deployment-available-condition/deployments: (2.30885ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.747376  107866 deployment_controller.go:152] Starting deployment controller
I0812 22:44:01.747420  107866 controller_utils.go:1029] Waiting for caches to sync for deployment controller
I0812 22:44:01.747444  107866 replica_set.go:182] Starting replicaset controller
I0812 22:44:01.747458  107866 controller_utils.go:1029] Waiting for caches to sync for ReplicaSet controller
I0812 22:44:01.747502  107866 reflector.go:122] Starting reflector *v1.Pod (12h0m0s) from k8s.io/client-go/informers/factory.go:133
I0812 22:44:01.747527  107866 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0812 22:44:01.747567  107866 reflector.go:122] Starting reflector *v1.ReplicaSet (12h0m0s) from k8s.io/client-go/informers/factory.go:133
I0812 22:44:01.747585  107866 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0812 22:44:01.748029  107866 reflector.go:122] Starting reflector *v1.Deployment (12h0m0s) from k8s.io/client-go/informers/factory.go:133
I0812 22:44:01.748057  107866 reflector.go:160] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:133
I0812 22:44:01.748411  107866 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (638.275µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:56602]
I0812 22:44:01.748804  107866 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (648.569µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:56802]
I0812 22:44:01.749258  107866 get.go:250] Starting watch for /api/v1/pods, rv=21304 labels= fields= timeout=7m59s
I0812 22:44:01.749413  107866 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=21306 labels= fields= timeout=5m14s
I0812 22:44:01.750075  107866 httplog.go:90] GET /apis/apps/v1/deployments?limit=500&resourceVersion=0: (1.50273ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:56804]
I0812 22:44:01.750285  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.727353ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.750823  107866 deployment_controller.go:168] Adding deployment deployment
I0812 22:44:01.751173  107866 get.go:250] Starting watch for /apis/apps/v1/deployments, rv=21643 labels= fields= timeout=6m35s
I0812 22:44:01.761782  107866 httplog.go:90] GET /healthz: (1.380585ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.763622  107866 httplog.go:90] GET /api/v1/namespaces/default: (1.175128ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.765784  107866 httplog.go:90] POST /api/v1/namespaces: (1.680873ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.767411  107866 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.247531ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.771758  107866 httplog.go:90] POST /api/v1/namespaces/default/services: (3.763974ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.773581  107866 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.421786ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.776654  107866 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.507465ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0812 22:44:01.847643  107866 shared_informer.go:211] caches populated
I0812 22:44:01.847670  107866 shared_informer.go:211] caches populated
I0812 22:44:01.847695  107866 controller_utils.go:1036] Caches are synced for ReplicaSet controller
I0812 22:44:01.847679  107866 controller_utils.go:1036] Caches are synced for deployment controller
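The controller start-up just above -- three reflectors doing an initial LIST followed by a WATCH, then "Waiting for caches to sync" and "Caches are synced" -- is the standard shared-informer pattern. A minimal sketch with client-go follows; the 12h resync period matches the reflector lines in the log, while the in-cluster config is an assumption purely for illustration (the test wires its informers to the local test apiserver at 127.0.0.1 instead).

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: in-cluster config for illustration
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 12*time.Hour)
	deployments := factory.Apps().V1().Deployments().Informer()
	replicaSets := factory.Apps().V1().ReplicaSets().Informer()
	pods := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // kicks off the LIST+WATCH reflectors seen in the log

	// Equivalent to "Waiting for caches to sync" / "Caches are synced".
	if !cache.WaitForCacheSync(stop, deployments.HasSynced, replicaSets.HasSynced, pods.HasSynced) {
		panic("caches did not sync")
	}
}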
I0812 22:44:01.847783  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:01.847739339 +0000 UTC m=+159.195784354)
I0812 22:44:01.848136  107866 deployment_util.go:259] Updating replica set "deployment-cddb65674" revision to 1
I0812 22:44:01.851324  107866 httplog.go:90] POST /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets: (2.700964ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56632]
I0812 22:44:01.851429  107866 controller_utils.go:202] Controller test-deployment-available-condition/deployment-cddb65674 either never recorded expectations, or the ttl expired.
I0812 22:44:01.851450  107866 deployment_controller.go:214] ReplicaSet deployment-cddb65674 added.
I0812 22:44:01.851465  107866 controller_utils.go:219] Setting expectations &controller.ControlleeExpectations{add:10, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:01.851515  107866 replica_set.go:477] Too few replicas for ReplicaSet test-deployment-available-condition/deployment-cddb65674, need 10, creating 10
I0812 22:44:01.852078  107866 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"test-deployment-available-condition", Name:"deployment", UID:"ee055131-745e-472c-bbd3-689aaa864823", APIVersion:"apps/v1", ResourceVersion:"21643", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-cddb65674 to 10
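The sync above is reconciling a freshly created Deployment: ten replicas of a single fake-name/fakeimage container selected by name=test, all of which is visible in the pod dumps further down. A minimal client-go sketch of creating an equivalent Deployment in the same namespace follows, assuming a recent client-go and a kubeconfig at the default location; it is an illustration, not the test's own setup code.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"name": "test"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10), // "need 10, creating 10" in the log
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "fake-name", Image: "fakeimage"}},
				},
			},
		},
	}

	_, err = cs.AppsV1().Deployments("test-deployment-available-condition").
		Create(context.Background(), d, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}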
I0812 22:44:01.852863  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.573729ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0812 22:44:01.854661  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.430879ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56808]
I0812 22:44:01.855049  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:01.855074  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:01.851937794 +0000 UTC m=+159.199982832 - now: 2019-08-12 22:44:01.855065966 +0000 UTC m=+159.203111005]
I0812 22:44:01.855322  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.657065ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56810]
I0812 22:44:01.856475  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (4.504814ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56632]
I0812 22:44:01.857080  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-9dpjk
I0812 22:44:01.857305  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-9dpjk
I0812 22:44:01.857091  107866 replica_set.go:275] Pod deployment-cddb65674-9dpjk created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-9dpjk", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-9dpjk", UID:"03582a8d-5b72-408e-9efe-16d5e1e1a862", ResourceVersion:"21651", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246641, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc013b73a5a), BlockOwnerDeletion:(*bool)(0xc013b73a5b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc013b73b20), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc012fc8a20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc013b73b28), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:01.857373  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:9, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:01.857506  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.083536ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56808]
I0812 22:44:01.857724  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (9.979277ms)
I0812 22:44:01.857744  107866 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on deployments.apps "deployment": the object has been modified; please apply your changes to the latest version and try again
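The 409 above is the usual optimistic-concurrency conflict: the controller PUT a stale status, the apiserver rejected it, and the controller logs the error and simply resyncs, as the very next "Started syncing deployment" line shows. Outside a controller loop, the same situation is typically handled with client-go's RetryOnConflict helper; a minimal sketch follows, assuming a recent client-go and reusing the namespace and name from this test. The annotation it sets is a hypothetical example mutation.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "test-deployment-available-condition", "deployment"

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest version before every attempt, mutate, then update;
		// a 409 makes RetryOnConflict run this function again.
		d, err := cs.AppsV1().Deployments(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations["example"] = "retried-on-conflict"
		_, err = cs.AppsV1().Deployments(ns).Update(context.Background(), d, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}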
I0812 22:44:01.857774  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:01.857769068 +0000 UTC m=+159.205814099)
I0812 22:44:01.858210  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:01 +0000 UTC - now: 2019-08-12 22:44:01.858202574 +0000 UTC m=+159.206247619]
I0812 22:44:01.859164  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (1.688443ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56810]
I0812 22:44:01.860094  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.581534ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56806]
I0812 22:44:01.860128  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.341992ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56808]
I0812 22:44:01.860062  107866 replica_set.go:275] Pod deployment-cddb65674-7vs5r created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-7vs5r", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-7vs5r", UID:"cb6630a6-7aba-4905-9875-4326d520323f", ResourceVersion:"21653", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246641, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc013a32d9a), BlockOwnerDeletion:(*bool)(0xc013a32d9b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc013a32e10), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc013004ea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc013a32e18), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:01.860195  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:8, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:01.860314  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-n8ljs
I0812 22:44:01.860230  107866 replica_set.go:275] Pod deployment-cddb65674-n8ljs created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-n8ljs", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-n8ljs", UID:"752c2b2b-5953-4b55-9741-6a4140028d26", ResourceVersion:"21654", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246641, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc013a3307a), BlockOwnerDeletion:(*bool)(0xc013a3307b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc013a330f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc013004f00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc013a330f8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:01.860335  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-7vs5r
I0812 22:44:01.860337  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:7, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:01.860360  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-n8ljs
I0812 22:44:01.860673  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-7vs5r
I0812 22:44:01.862030  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.488823ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56812]
I0812 22:44:01.862194  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:01.862480  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (4.706357ms)
I0812 22:44:01.862508  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:01.862503222 +0000 UTC m=+159.210548254)
I0812 22:44:01.862567  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (1.687232ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56814]
I0812 22:44:01.862698  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (1.860345ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56810]
I0812 22:44:01.862729  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-qxdmx
I0812 22:44:01.862751  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-qxdmx
I0812 22:44:01.862864  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:01 +0000 UTC - now: 2019-08-12 22:44:01.862857043 +0000 UTC m=+159.210902074]
I0812 22:44:01.862900  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:01.862912  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (406.997µs)
I0812 22:44:01.862918  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-rsm8r
I0812 22:44:01.862927  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:01.862923081 +0000 UTC m=+159.210968112)
I0812 22:44:01.862946  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-rsm8r
I0812 22:44:01.863067  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.188297ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56806]
I0812 22:44:01.863029  107866 replica_set.go:275] Pod deployment-cddb65674-qxdmx created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-qxdmx", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-qxdmx", UID:"a16ee69a-5732-4714-a826-d82fa144d903", ResourceVersion:"21656", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246641, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc01395ac7a), BlockOwnerDeletion:(*bool)(0xc01395ac7b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc01395acf0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc012e8d020), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc01395acf8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:01.863155  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:6, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:01.863196  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:01 +0000 UTC - now: 2019-08-12 22:44:01.863192453 +0000 UTC m=+159.211237483]
I0812 22:44:01.863222  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:01.863175  107866 replica_set.go:275] Pod deployment-cddb65674-rsm8r created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-rsm8r", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-rsm8r", UID:"44fb5085-6be8-4971-9588-5ddb4911f5db", ResourceVersion:"21658", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246641, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc01395affa), BlockOwnerDeletion:(*bool)(0xc01395affb)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc01395b070), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc012e8d1a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc01395b078), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:01.863243  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:5, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:01.863231  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (305.403µs)
I0812 22:44:01.864232  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.765401ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56818]
I0812 22:44:01.864485  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-vpdzz
I0812 22:44:01.864527  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-vpdzz
I0812 22:44:01.865029  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (3.520286ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56816]
I0812 22:44:01.865391  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (1.407307ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56814]
I0812 22:44:01.865085  107866 replica_set.go:275] Pod deployment-cddb65674-vpdzz created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-vpdzz", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-vpdzz", UID:"a46ca48a-c5f7-47aa-9d22-26fdb5f1a7bd", ResourceVersion:"21659", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246641, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc013612a9a), BlockOwnerDeletion:(*bool)(0xc013612a9b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc013612b10), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc012fbf6e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc013612b18), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:01.865932  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:4, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:01.866009  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-btlgk
I0812 22:44:01.866093  107866 replica_set.go:275] Pod deployment-cddb65674-btlgk created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-btlgk", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-btlgk", UID:"891f13b8-3bd4-4a3d-981c-daf2ac33b93e", ResourceVersion:"21660", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246641, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc013612d7a), BlockOwnerDeletion:(*bool)(0xc013612d7b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc013612df0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc012fbf740), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc013612df8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:01.866449  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:3, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:01.866329  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-btlgk
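The repeated controller_utils.go:236 lines above show the ReplicaSet controller's expectations counter ticking down (add:6, add:5, add:4, ...) as each pod-created watch event comes back for the ten replicas it just asked for. Below is a minimal sketch of that bookkeeping, assuming a simplified single-controllee counter rather than kube-controller-manager's real expectations cache; the names mirror the log but the code is purely illustrative.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// ControlleeExpectations mirrors the add/del counters printed by
// controller_utils.go above: how many creations/deletions the controller
// still expects to observe before it trusts its informer cache again.
type ControlleeExpectations struct {
	add int64
	del int64
	key string
}

// ExpectCreations records that the controller is about to create n pods.
func (e *ControlleeExpectations) ExpectCreations(n int64) { atomic.StoreInt64(&e.add, n) }

// CreationObserved lowers the counter once per observed pod-created event.
func (e *ControlleeExpectations) CreationObserved() { atomic.AddInt64(&e.add, -1) }

// Fulfilled reports whether a full resync may run again.
func (e *ControlleeExpectations) Fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}

func main() {
	exp := &ControlleeExpectations{key: "test-deployment-available-condition/deployment-cddb65674"}
	exp.ExpectCreations(10) // the ReplicaSet wants 10 replicas
	for i := 0; i < 10; i++ {
		exp.CreationObserved() // one informer "pod created" callback per new pod
		fmt.Printf("Lowered expectations add:%d del:%d key:%q\n",
			atomic.LoadInt64(&exp.add), atomic.LoadInt64(&exp.del), exp.key)
	}
	fmt.Println("Controller expectations fulfilled:", exp.Fulfilled())
}
```

Until the counter drains to zero the controller avoids re-running replica management for this key, which is why the log only switches to "Controller expectations fulfilled" lines once all ten creations have been observed.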
I0812 22:44:01.953090  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.046049ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56818]
I0812 22:44:02.051834  107866 request.go:538] Throttling request took 186.113181ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/events
I0812 22:44:02.052583  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.597006ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56818]
I0812 22:44:02.055216  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.512563ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56812]
I0812 22:44:02.153275  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.149313ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.251823  107866 request.go:538] Throttling request took 385.339663ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/pods
I0812 22:44:02.257795  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (5.707904ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56818]
I0812 22:44:02.257820  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (6.838358ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.258046  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-jkqxx
I0812 22:44:02.258080  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-jkqxx
I0812 22:44:02.257966  107866 replica_set.go:275] Pod deployment-cddb65674-jkqxx created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-jkqxx", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-jkqxx", UID:"6a771d73-1b61-451a-ad9a-a32646a6a53b", ResourceVersion:"21667", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246642, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc01466b7ea), BlockOwnerDeletion:(*bool)(0xc01466b7eb)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc01466b860), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0145993e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc01466b868), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:02.258132  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:2, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.353048  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.850757ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.451868  107866 request.go:538] Throttling request took 585.268348ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/pods
I0812 22:44:02.453717  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.995135ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.454367  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.272462ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56818]
I0812 22:44:02.454656  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-zlpcc
I0812 22:44:02.454557  107866 replica_set.go:275] Pod deployment-cddb65674-zlpcc created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-zlpcc", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-zlpcc", UID:"1a71d3ad-19de-400f-9a60-81fdf67d0dcf", ResourceVersion:"21673", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246642, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc0135af3fa), BlockOwnerDeletion:(*bool)(0xc0135af3fb)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0135af470), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc012e6d740), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0135af478), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:02.454698  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.454709  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-zlpcc
I0812 22:44:02.553228  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.783128ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56818]
I0812 22:44:02.651866  107866 request.go:538] Throttling request took 785.152978ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/pods
I0812 22:44:02.653232  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.182894ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56818]
I0812 22:44:02.654688  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.163498ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56812]
I0812 22:44:02.654978  107866 controller_utils.go:589] Controller deployment-cddb65674 created pod deployment-cddb65674-xqllz
I0812 22:44:02.655046  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 0->0 (need 10), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0812 22:44:02.654927  107866 replica_set.go:275] Pod deployment-cddb65674-xqllz created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-cddb65674-xqllz", GenerateName:"deployment-cddb65674-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-xqllz", UID:"a76220db-ced3-4bbe-8bad-df13b4a2c444", ResourceVersion:"21681", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246642, loc:(*time.Location)(0xa0c0b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"cddb65674"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", Controller:(*bool)(0xc01357d11a), BlockOwnerDeletion:(*bool)(0xc01357d11b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc01357d190), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc012dcd1a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc01357d198), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0812 22:44:02.655067  107866 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.655106  107866 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-cddb65674", UID:"4d6aa9fc-1e04-4702-ac22-29621fd05a73", APIVersion:"apps/v1", ResourceVersion:"21648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-cddb65674-xqllz
I0812 22:44:02.657478  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (2.150788ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56812]
I0812 22:44:02.657543  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:02.657598  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:02.657570768 +0000 UTC m=+160.005615785)
I0812 22:44:02.657808  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (806.389155ms)
I0812 22:44:02.657843  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.657927  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 0->10 (need 10), fullyLabeledReplicas 0->10, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:02.658069  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:01 +0000 UTC - now: 2019-08-12 22:44:02.65806345 +0000 UTC m=+160.006108486]
I0812 22:44:02.658111  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7198s
I0812 22:44:02.658126  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (552.28µs)
I0812 22:44:02.660692  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (2.472119ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56812]
I0812 22:44:02.660940  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (3.097235ms)
I0812 22:44:02.661067  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:02.661073  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.661115  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:02.661098569 +0000 UTC m=+160.009143602)
I0812 22:44:02.661203  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (157.887µs)
I0812 22:44:02.663827  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.047351ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56812]
I0812 22:44:02.664158  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (3.053602ms)
I0812 22:44:02.664942  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:02.664997  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:02.664972345 +0000 UTC m=+160.013017359)
I0812 22:44:02.665385  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:02 +0000 UTC - now: 2019-08-12 22:44:02.66538063 +0000 UTC m=+160.013425645]
I0812 22:44:02.665430  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:02.665451  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (476.255µs)
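The deployment_util.go:806 / progress.go:193 pairs above ("timed out (false)" followed by "Queueing up deployment ... for a progress check after 7199s" or 7198s) are plain deadline arithmetic: with a progress deadline of roughly 7200s, which is what this test appears to use, and about a second elapsed since the last recorded progress, the controller requeues the deployment for deadline minus elapsed. A hedged sketch of that calculation, with illustrative names rather than the deployment controller's real API:

```go
package main

import (
	"fmt"
	"time"
)

// requeueAfter is a simplified version of the check behind the log lines:
// how long until the deployment could exceed its progress deadline, given
// the time of its last observed progress.
func requeueAfter(progressDeadline time.Duration, lastProgress, now time.Time) time.Duration {
	remaining := progressDeadline - now.Sub(lastProgress)
	if remaining < 0 {
		return 0 // deadline already exceeded; check (and set the condition) immediately
	}
	return remaining
}

func main() {
	// Roughly the timestamps from the log: last progress at 22:44:01, a sync
	// running ~0.86s later, and an (assumed) 7200s progress deadline.
	last := time.Date(2019, 8, 12, 22, 44, 1, 0, time.UTC)
	now := last.Add(863 * time.Millisecond)
	d := requeueAfter(7200*time.Second, last, now)
	fmt.Printf("Queueing up deployment for a progress check after %ds\n", int(d.Seconds()))
}
```

The interval drifts between 7199s and 7198s across syncs simply because a different amount of time has elapsed since the last recorded progress at each check.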
I0812 22:44:02.752387  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.489894ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.754296  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.331642ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.756160  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.393078ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.758586  107866 httplog.go:90] GET /api/v1/namespaces/test-deployment-available-condition/pods?labelSelector=name%3Dtest: (2.022443ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.760499  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.212094ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
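The httplog.go:90 lines throughout this section all share one shape: method and path, latency in parentheses, status code, then the caller's user agent and remote address. As a rough stand-in for that request-logging decorator (the handler, port, and paths below are dummies for illustration, not the test's in-process apiserver):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// statusRecorder captures the status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (s *statusRecorder) WriteHeader(code int) {
	s.status = code
	s.ResponseWriter.WriteHeader(code)
}

// withHTTPLog logs method, path, latency, status, user agent and remote
// address for every request, in the same spirit as the httplog.go entries.
func withHTTPLog(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, r)
		fmt.Printf("%s %s: (%v) %d [%s %s]\n",
			r.Method, r.URL.Path, time.Since(start), rec.status, r.UserAgent(), r.RemoteAddr)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/apis/apps/v1/namespaces/example/deployments/deployment",
		func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusOK)
			fmt.Fprintln(w, `{"kind":"Deployment"}`)
		})
	// Run and GET the path above to see lines shaped like the log entries here.
	_ = http.ListenAndServe("127.0.0.1:8080", withHTTPLog(mux))
}
```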
I0812 22:44:02.851858  107866 request.go:538] Throttling request took 796.162315ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/events
I0812 22:44:02.860704  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (4.933616ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56812]
I0812 22:44:02.944618  107866 request.go:538] Throttling request took 183.736373ms, request: GET:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets?labelSelector=name%3Dtest
I0812 22:44:02.946849  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets?labelSelector=name%3Dtest: (1.905977ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
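The request.go:538 "Throttling request took ..." lines come from client-side rate limiting: each client request waits on a token bucket, and a message is logged when that wait is long enough to matter. A minimal stand-in built on the same token-bucket idea follows; the QPS and burst values are invented for illustration, not read from this test's client configuration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 5 requests/second with a burst of 10; purely illustrative numbers.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil { // blocks until a token is free
			panic(err)
		}
		// Only complain when the wait was noticeable, mirroring the log's behaviour.
		if wait := time.Since(start); wait > 50*time.Millisecond {
			fmt.Printf("Throttling request took %v, request: GET:/example\n", wait)
		}
	}
}
```

Once the burst is spent, each additional request has to wait for the bucket to refill, which is why the reported throttling delays in the log grow from ~180ms toward ~800ms as the controllers and the test client hammer the same server.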
I0812 22:44:02.952466  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-7vs5r/status: (3.871999ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.952722  107866 replica_set.go:338] Pod deployment-cddb65674-7vs5r updated, objectMeta {Name:deployment-cddb65674-7vs5r GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-7vs5r UID:cb6630a6-7aba-4905-9875-4326d520323f ResourceVersion:21653 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc013a32d9a BlockOwnerDeletion:0xc013a32d9b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-7vs5r GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-7vs5r UID:cb6630a6-7aba-4905-9875-4326d520323f ResourceVersion:21737 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc013348a7a BlockOwnerDeletion:0xc013348a7b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:02.952935  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:02.953019  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.953168  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 0->1, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:02.956198  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:02.956263  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:02.956228432 +0000 UTC m=+160.304273453)
I0812 22:44:02.956450  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (3.051277ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56818]
I0812 22:44:02.957194  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-9dpjk/status: (4.11885ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.957475  107866 replica_set.go:338] Pod deployment-cddb65674-9dpjk updated, objectMeta {Name:deployment-cddb65674-9dpjk GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-9dpjk UID:03582a8d-5b72-408e-9efe-16d5e1e1a862 ResourceVersion:21651 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc013b73a5a BlockOwnerDeletion:0xc013b73a5b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-9dpjk GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-9dpjk UID:03582a8d-5b72-408e-9efe-16d5e1e1a862 ResourceVersion:21739 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc01330eb8a BlockOwnerDeletion:0xc01330eb8b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:02.957569  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:02.957797  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (4.803651ms)
I0812 22:44:02.957832  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.957942  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 1->2, availableReplicas 0->0, sequence No: 1->1
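In the replica_set_utils.go:58 status lines, readyReplicas climbs 0->1->2->... while availableReplicas stays at 0, and each pod update is followed by "ReplicaSet ... will be enqueued after 3600s for availability check". That is the minReadySeconds rule: a Ready pod only counts as available once it has stayed Ready for minReadySeconds (evidently 3600s here), so the controller schedules a re-check that far in the future. A simplified restatement of the rule, not the exact upstream helper:

```go
package main

import (
	"fmt"
	"time"
)

// isAvailable reports whether a pod that turned Ready at readySince counts as
// available at time now. With minReadySeconds == 0, Ready already implies
// available; otherwise the pod must have stayed Ready for that long.
func isAvailable(readySince, now time.Time, minReadySeconds int32) bool {
	if minReadySeconds == 0 {
		return true
	}
	return now.Sub(readySince) >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	readySince := time.Date(2019, 8, 12, 22, 44, 2, 0, time.UTC) // pod marked Ready by the test
	now := readySince.Add(5 * time.Second)

	fmt.Println("available right after becoming Ready:", isAvailable(readySince, now, 3600))
	fmt.Println("available one hour later:            ", isAvailable(readySince, now.Add(time.Hour), 3600))
}
```

This is the behaviour a test of the Available condition cares about: pods that are Ready but not yet old enough to be available must leave availableReplicas, and therefore the Deployment's Available condition, unsatisfied.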
I0812 22:44:02.962363  107866 replica_set.go:338] Pod deployment-cddb65674-btlgk updated, objectMeta {Name:deployment-cddb65674-btlgk GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-btlgk UID:891f13b8-3bd4-4a3d-981c-daf2ac33b93e ResourceVersion:21660 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc013612d7a BlockOwnerDeletion:0xc013612d7b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-btlgk GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-btlgk UID:891f13b8-3bd4-4a3d-981c-daf2ac33b93e ResourceVersion:21740 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc0131ce11a BlockOwnerDeletion:0xc0131ce11b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:02.962480  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:02.964160  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (5.689743ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56818]
I0812 22:44:02.964160  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-btlgk/status: (6.099296ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.965024  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:02.965863  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (8.033167ms)
I0812 22:44:02.965908  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.966028  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 2->3, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:02.972223  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (14.877292ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:57110]
I0812 22:44:02.972223  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (4.397532ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56818]
I0812 22:44:02.972588  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (6.681963ms)
I0812 22:44:02.972835  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (16.600107ms)
I0812 22:44:02.972867  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:02.972862357 +0000 UTC m=+160.320907383)
I0812 22:44:02.973038  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:02.973287  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.973374  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (93.335µs)
I0812 22:44:02.973422  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:02.973667  107866 replica_set.go:338] Pod deployment-cddb65674-jkqxx updated, objectMeta {Name:deployment-cddb65674-jkqxx GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-jkqxx UID:6a771d73-1b61-451a-ad9a-a32646a6a53b ResourceVersion:21667 Generation:0 CreationTimestamp:2019-08-12 22:44:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc01466b7ea BlockOwnerDeletion:0xc01466b7eb}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-jkqxx GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-jkqxx UID:6a771d73-1b61-451a-ad9a-a32646a6a53b ResourceVersion:21743 Generation:0 CreationTimestamp:2019-08-12 22:44:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc01327577a BlockOwnerDeletion:0xc01327577b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:02.973766  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:02.973805  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.973889  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 3->4, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:02.974504  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-jkqxx/status: (9.329955ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.975931  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (1.818012ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:56818]
I0812 22:44:02.976186  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (2.383371ms)
I0812 22:44:02.976307  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.976427  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (145.516µs)
I0812 22:44:02.976464  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:02.979032  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (5.499271ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:57110]
I0812 22:44:02.979236  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (6.368144ms)
I0812 22:44:02.979266  107866 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on deployments.apps "deployment": the object has been modified; please apply your changes to the latest version and try again
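The 409 reported at deployment_controller.go:484 ("Operation cannot be fulfilled ... the object has been modified") is ordinary optimistic concurrency: the controller PUT a status built from a stale resourceVersion while the ReplicaSet controller was writing the same object, the API server rejected it, and the next sync retries with a fresh copy. A toy model of that conflict-and-retry loop follows; the types and helper are illustrative, not client-go's API.

```go
package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New(`Operation cannot be fulfilled on deployments.apps "deployment": the object has been modified; please apply your changes to the latest version and try again`)

// updateStatus simulates a PUT that only succeeds when the caller's cached
// resourceVersion matches the server's current one (optimistic concurrency).
func updateStatus(serverRV *int, haveRV int) error {
	if haveRV != *serverRV {
		return errConflict // the API server answers 409 Conflict
	}
	*serverRV++ // a successful write bumps the resourceVersion
	return nil
}

func main() {
	serverRV := 21751 // another controller already moved the object forward
	haveRV := 21750   // our cached copy is one write behind

	for attempt := 1; attempt <= 3; attempt++ {
		if err := updateStatus(&serverRV, haveRV); err != nil {
			fmt.Printf("attempt %d: Error syncing deployment: %v\n", attempt, err)
			haveRV = serverRV // re-read the object before retrying, as the message asks
			continue
		}
		fmt.Printf("attempt %d: status updated, resourceVersion is now %d\n", attempt, serverRV)
		break
	}
}
```

In the controller the retry is implicit: the failed key is requeued, and the subsequent "Started syncing deployment" line shows the next attempt working from the newer object, after which the PUT succeeds.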
I0812 22:44:02.979297  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:02.97929189 +0000 UTC m=+160.327336921)
I0812 22:44:02.979331  107866 replica_set.go:338] Pod deployment-cddb65674-n8ljs updated, objectMeta {Name:deployment-cddb65674-n8ljs GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-n8ljs UID:752c2b2b-5953-4b55-9741-6a4140028d26 ResourceVersion:21654 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc013a3307a BlockOwnerDeletion:0xc013a3307b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-n8ljs GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-n8ljs UID:752c2b2b-5953-4b55-9741-6a4140028d26 ResourceVersion:21747 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc0133f3e3a BlockOwnerDeletion:0xc0133f3e3b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:02.979453  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:02.979509  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.979634  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 4->5, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:02.981021  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-n8ljs/status: (4.133552ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56818]
I0812 22:44:02.982099  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (2.198173ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:02.982316  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (2.828799ms)
I0812 22:44:02.982390  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:02.982384  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.982495  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (118.1µs)
I0812 22:44:02.983219  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.31776ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56812]
I0812 22:44:02.983351  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:02.983519  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (4.224137ms)
I0812 22:44:02.983569  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:02.983565213 +0000 UTC m=+160.331610230)
I0812 22:44:02.988579  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.116509ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56812]
I0812 22:44:02.988673  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:02.988919  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (5.347357ms)
I0812 22:44:02.988955  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:02.988950968 +0000 UTC m=+160.336996004)
I0812 22:44:02.989323  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:02 +0000 UTC - now: 2019-08-12 22:44:02.98931587 +0000 UTC m=+160.337360905]
I0812 22:44:02.989362  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:02.989376  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (423.01µs)
I0812 22:44:02.989569  107866 replica_set.go:338] Pod deployment-cddb65674-qxdmx updated, objectMeta {Name:deployment-cddb65674-qxdmx GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-qxdmx UID:a16ee69a-5732-4714-a826-d82fa144d903 ResourceVersion:21656 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc01395ac7a BlockOwnerDeletion:0xc01395ac7b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-qxdmx GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-qxdmx UID:a16ee69a-5732-4714-a826-d82fa144d903 ResourceVersion:21750 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc0131cff7a BlockOwnerDeletion:0xc0131cff7b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:02.989690  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:02.989731  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:02.989771  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-qxdmx/status: (6.412024ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56818]
I0812 22:44:02.989846  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 5->6, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:02.994788  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-rsm8r/status: (4.491024ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:02.994987  107866 replica_set.go:338] Pod deployment-cddb65674-rsm8r updated, objectMeta {Name:deployment-cddb65674-rsm8r GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-rsm8r UID:44fb5085-6be8-4971-9588-5ddb4911f5db ResourceVersion:21658 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc01395affa BlockOwnerDeletion:0xc01395affb}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-rsm8r GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-rsm8r UID:44fb5085-6be8-4971-9588-5ddb4911f5db ResourceVersion:21752 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc012e7a72a BlockOwnerDeletion:0xc012e7a72b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:02.995092  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:03.000272  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-vpdzz/status: (4.909752ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:03.000992  107866 replica_set.go:338] Pod deployment-cddb65674-vpdzz updated, objectMeta {Name:deployment-cddb65674-vpdzz GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-vpdzz UID:a46ca48a-c5f7-47aa-9d22-26fdb5f1a7bd ResourceVersion:21659 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc013612a9a BlockOwnerDeletion:0xc013612a9b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-vpdzz GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-vpdzz UID:a46ca48a-c5f7-47aa-9d22-26fdb5f1a7bd ResourceVersion:21753 Generation:0 CreationTimestamp:2019-08-12 22:44:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc012fcfb3a BlockOwnerDeletion:0xc012fcfb3b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:03.001102  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:03.001305  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (5.912997ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.001547  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (11.82022ms)
I0812 22:44:03.001588  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:03.001746  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 5->8, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:03.002816  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:03.002846  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.002841176 +0000 UTC m=+160.350886191)
I0812 22:44:03.003852  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (1.828328ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.005372  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674: (1.143209ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.006106  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 6->8, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:03.006711  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.756547ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:57112]
I0812 22:44:03.006809  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-xqllz/status: (3.734606ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:03.007135  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (4.287437ms)
I0812 22:44:03.007455  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:03.007508  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.007502309 +0000 UTC m=+160.355547334)
I0812 22:44:03.007481  107866 replica_set.go:338] Pod deployment-cddb65674-xqllz updated, objectMeta {Name:deployment-cddb65674-xqllz GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-xqllz UID:a76220db-ced3-4bbe-8bad-df13b4a2c444 ResourceVersion:21681 Generation:0 CreationTimestamp:2019-08-12 22:44:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc01357d11a BlockOwnerDeletion:0xc01357d11b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-xqllz GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-xqllz UID:a76220db-ced3-4bbe-8bad-df13b4a2c444 ResourceVersion:21757 Generation:0 CreationTimestamp:2019-08-12 22:44:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc012f677ca BlockOwnerDeletion:0xc012f677cb}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:03.007634  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:03.007916  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:03 +0000 UTC - now: 2019-08-12 22:44:03.007909921 +0000 UTC m=+160.355954954]
I0812 22:44:03.007960  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:03.007975  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (469.397µs)
I0812 22:44:03.009473  107866 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-zlpcc/status: (2.173273ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57112]
I0812 22:44:03.009825  107866 replica_set.go:338] Pod deployment-cddb65674-zlpcc updated, objectMeta {Name:deployment-cddb65674-zlpcc GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-zlpcc UID:1a71d3ad-19de-400f-9a60-81fdf67d0dcf ResourceVersion:21673 Generation:0 CreationTimestamp:2019-08-12 22:44:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc0135af3fa BlockOwnerDeletion:0xc0135af3fb}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-cddb65674-zlpcc GenerateName:deployment-cddb65674- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-cddb65674-zlpcc UID:1a71d3ad-19de-400f-9a60-81fdf67d0dcf ResourceVersion:21759 Generation:0 CreationTimestamp:2019-08-12 22:44:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:cddb65674] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-cddb65674 UID:4d6aa9fc-1e04-4702-ac22-29621fd05a73 Controller:0xc012f3937a BlockOwnerDeletion:0xc012f3937b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0812 22:44:03.009924  107866 replica_set.go:348] ReplicaSet "deployment-cddb65674" will be enqueued after 3600s for availability check
I0812 22:44:03.010582  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:03.010638  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.010633015 +0000 UTC m=+160.358678048)
I0812 22:44:03.012779  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (6.37049ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.013046  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (11.460179ms)
I0812 22:44:03.013097  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:03.013249  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 8->10, availableReplicas 0->0, sequence No: 1->1
I0812 22:44:03.013982  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.660487ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:57112]
I0812 22:44:03.014398  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (3.757975ms)
I0812 22:44:03.014897  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:03.014933  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.014928085 +0000 UTC m=+160.362973124)
I0812 22:44:03.015462  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:03 +0000 UTC - now: 2019-08-12 22:44:03.015451814 +0000 UTC m=+160.363496838]
I0812 22:44:03.015516  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:03.015537  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (604.201µs)
I0812 22:44:03.051865  107866 request.go:538] Throttling request took 190.725313ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/events
I0812 22:44:03.054417  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.111926ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.057663  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (2.181881ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.058037  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (44.941798ms)
I0812 22:44:03.058327  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:03.058373  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.058367933 +0000 UTC m=+160.406412959)
I0812 22:44:03.058326  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:03.058774  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (456.826µs)
I0812 22:44:03.061294  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.177265ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:57110]
I0812 22:44:03.061583  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (3.209414ms)
I0812 22:44:03.061993  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:03.062027  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.06202315 +0000 UTC m=+160.410068166)
I0812 22:44:03.062324  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:03 +0000 UTC - now: 2019-08-12 22:44:03.062319886 +0000 UTC m=+160.410364917]
I0812 22:44:03.062363  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:03.062380  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (355.64µs)
I0812 22:44:03.144455  107866 request.go:538] Throttling request took 134.247564ms, request: GET:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0812 22:44:03.148408  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (3.667178ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57110]
I0812 22:44:03.251813  107866 request.go:538] Throttling request took 196.867112ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/events
I0812 22:44:03.255139  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (3.049706ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.345718  107866 request.go:538] Throttling request took 196.571506ms, request: GET:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0812 22:44:03.348072  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.909368ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57110]
I0812 22:44:03.451906  107866 request.go:538] Throttling request took 196.381918ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/events
I0812 22:44:03.454789  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.566974ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.544462  107866 request.go:538] Throttling request took 195.695497ms, request: GET:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0812 22:44:03.547540  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.764353ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57110]
I0812 22:44:03.651868  107866 request.go:538] Throttling request took 196.669301ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/events
I0812 22:44:03.655013  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.847596ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.744472  107866 request.go:538] Throttling request took 196.457376ms, request: GET:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0812 22:44:03.746344  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.537365ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57110]
I0812 22:44:03.851859  107866 request.go:538] Throttling request took 196.384322ms, request: POST:http://127.0.0.1:39657/api/v1/namespaces/test-deployment-available-condition/events
I0812 22:44:03.854453  107866 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.28009ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.944512  107866 request.go:538] Throttling request took 197.588814ms, request: PUT:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0812 22:44:03.948066  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (3.207506ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57110]
I0812 22:44:03.948745  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:03.948790  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.948783566 +0000 UTC m=+161.296828581)
I0812 22:44:03.951932  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674: (2.610516ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:57110]
I0812 22:44:03.952044  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:03.952175  107866 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-cddb65674, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 10->10, availableReplicas 0->8, sequence No: 1->2
I0812 22:44:03.952414  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:03.953943  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674: (1.389164ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56812]
I0812 22:44:03.954151  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (5.36259ms)
I0812 22:44:03.954185  107866 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on replicasets.apps "deployment-cddb65674": the object has been modified; please apply your changes to the latest version and try again
I0812 22:44:03.954215  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.954211477 +0000 UTC m=+161.302256497)
I0812 22:44:03.954584  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:03 +0000 UTC - now: 2019-08-12 22:44:03.954578155 +0000 UTC m=+161.302623186]
I0812 22:44:03.955546  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-cddb65674/status: (3.052227ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:57110]
I0812 22:44:03.955605  107866 deployment_controller.go:280] ReplicaSet deployment-cddb65674 updated.
I0812 22:44:03.955807  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (3.775397ms)
I0812 22:44:03.955847  107866 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-cddb65674", timestamp:time.Time{wall:0xbf4c98cc72c048d7, ext:159199507399, loc:(*time.Location)(0xa0c0b80)}}
I0812 22:44:03.955957  107866 replica_set.go:566] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-cddb65674" (116.155µs)
I0812 22:44:03.956994  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.093943ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56812]
I0812 22:44:03.957279  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (3.058961ms)
I0812 22:44:03.957323  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.957318125 +0000 UTC m=+161.305363149)
I0812 22:44:03.957933  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:03.960014  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.052094ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56812]
I0812 22:44:03.960264  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (2.940196ms)
I0812 22:44:03.960288  107866 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on deployments.apps "deployment": the object has been modified; please apply your changes to the latest version and try again
I0812 22:44:03.960314  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.960310167 +0000 UTC m=+161.308355196)
I0812 22:44:03.964372  107866 deployment_controller.go:175] Updating deployment deployment
I0812 22:44:03.964452  107866 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.365011ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:56812]
I0812 22:44:03.964792  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (4.475313ms)
I0812 22:44:03.964834  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.964828913 +0000 UTC m=+161.312873934)
I0812 22:44:03.965427  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:03 +0000 UTC - now: 2019-08-12 22:44:03.96541964 +0000 UTC m=+161.313464671]
I0812 22:44:03.965472  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:03.965497  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (665.415µs)
I0812 22:44:03.965515  107866 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-08-12 22:44:03.965511724 +0000 UTC m=+161.313556739)
I0812 22:44:03.965844  107866 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-08-12 22:44:03 +0000 UTC - now: 2019-08-12 22:44:03.965839969 +0000 UTC m=+161.313884989]
I0812 22:44:03.965871  107866 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I0812 22:44:03.965891  107866 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (376.82µs)
I0812 22:44:04.144440  107866 request.go:538] Throttling request took 195.939111ms, request: GET:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0812 22:44:04.146951  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.222062ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:04.344570  107866 request.go:538] Throttling request took 197.202924ms, request: GET:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0812 22:44:04.347347  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.483595ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:04.544489  107866 request.go:538] Throttling request took 196.630673ms, request: GET:http://127.0.0.1:39657/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0812 22:44:04.547014  107866 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.11032ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:04.547444  107866 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0812 22:44:04.547811  107866 deployment_controller.go:164] Shutting down deployment controller
I0812 22:44:04.547845  107866 replica_set.go:194] Shutting down replicaset controller
I0812 22:44:04.548195  107866 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=21304&timeout=7m59s&timeoutSeconds=479&watch=true: (2.799261869s) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:56602]
I0812 22:44:04.548295  107866 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=21306&timeout=5m14s&timeoutSeconds=314&watch=true: (2.79912591s) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:56802]
I0812 22:44:04.548373  107866 httplog.go:90] GET /apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=21643&timeout=6m35s&timeoutSeconds=395&watch=true: (2.79747911s) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:56804]
I0812 22:44:04.554201  107866 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (5.724949ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0812 22:44:04.557911  107866 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.47757ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
--- FAIL: TestDeploymentAvailableCondition (6.33s)
    deployment.go:268: Updating deployment deployment
    deployment_test.go:989: unexpected .replicas: expect 10, got 8

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190812-223853.xml
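The assertion at deployment_test.go:989 compares the observed .status.replicas against the expected count (10) after the controllers have synced, and fails because the status still reports 8. As a hedged illustration only (not the actual integration-test code), the sketch below shows how such a status check is commonly polled with client-go; the function name, namespace/name parameters, polling interval, and timeout are assumptions, and it presumes a client-go version (v0.18 or newer) whose Get call takes a context.

package deploymentcheck

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicaSetReplicas polls the ReplicaSet until .status.replicas reaches
// the expected count or the timeout expires. Hypothetical helper for
// illustration; the real test has its own fixtures and assertions.
func waitForReplicaSetReplicas(c kubernetes.Interface, ns, name string, expected int32) error {
	err := wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		rs, err := c.AppsV1().ReplicaSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// A stale informer cache or a lost status update leaves .status.replicas
		// behind the spec, which matches the failure shape above (expect 10, got 8).
		return rs.Status.Replicas == expected, nil
	})
	if err != nil {
		return fmt.Errorf("replicaset %s/%s never reported %d replicas: %w", ns, name, expected, err)
	}
	return nil
}

Polling with a bounded timeout, rather than asserting immediately after the update, tolerates the 409 conflicts and retries visible in the controller log above.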

Find deployment-cddb65674-9dpjk mentions in log files | View test history on testgrid



Error lines from build-log.txt

... skipping 761 lines ...
W0812 22:33:34.930] I0812 22:33:34.825023   53188 taint_manager.go:162] Sending events to api server.
W0812 22:33:34.930] I0812 22:33:34.825152   53188 node_lifecycle_controller.go:418] Controller will reconcile labels.
W0812 22:33:34.930] I0812 22:33:34.825178   53188 node_lifecycle_controller.go:431] Controller will taint node by condition.
W0812 22:33:34.931] I0812 22:33:34.825195   53188 controllermanager.go:535] Started "nodelifecycle"
W0812 22:33:34.931] I0812 22:33:34.825292   53188 node_lifecycle_controller.go:455] Starting node controller
W0812 22:33:34.931] I0812 22:33:34.825322   53188 controller_utils.go:1029] Waiting for caches to sync for taint controller
W0812 22:33:34.931] E0812 22:33:34.825554   53188 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0812 22:33:34.931] W0812 22:33:34.825591   53188 controllermanager.go:527] Skipping "service"
W0812 22:33:34.931] I0812 22:33:34.825600   53188 core.go:185] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0812 22:33:34.931] W0812 22:33:34.825605   53188 controllermanager.go:527] Skipping "route"
W0812 22:33:34.932] I0812 22:33:34.825883   53188 controllermanager.go:535] Started "clusterrole-aggregation"
W0812 22:33:34.932] W0812 22:33:34.825893   53188 controllermanager.go:527] Skipping "root-ca-cert-publisher"
W0812 22:33:34.932] I0812 22:33:34.826008   53188 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
... skipping 23 lines ...
W0812 22:33:35.137] I0812 22:33:35.136377   53188 disruption.go:333] Starting disruption controller
W0812 22:33:35.137] I0812 22:33:35.136524   53188 controller_utils.go:1029] Waiting for caches to sync for disruption controller
W0812 22:33:35.143] I0812 22:33:35.143095   53188 controllermanager.go:535] Started "namespace"
W0812 22:33:35.144] I0812 22:33:35.143205   53188 namespace_controller.go:186] Starting namespace controller
W0812 22:33:35.144] I0812 22:33:35.143242   53188 controller_utils.go:1029] Waiting for caches to sync for namespace controller
W0812 22:33:35.144] I0812 22:33:35.143422   53188 node_lifecycle_controller.go:77] Sending events to api server
W0812 22:33:35.145] E0812 22:33:35.143467   53188 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0812 22:33:35.145] W0812 22:33:35.143475   53188 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0812 22:33:35.161] W0812 22:33:35.160895   53188 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0812 22:33:35.168] I0812 22:33:35.168056   53188 controller_utils.go:1036] Caches are synced for TTL controller
W0812 22:33:35.200] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0812 22:33:35.224] I0812 22:33:35.223675   53188 controller_utils.go:1036] Caches are synced for PV protection controller
W0812 22:33:35.226] I0812 22:33:35.226268   53188 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0812 22:33:35.245] E0812 22:33:35.244855   53188 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0812 22:33:35.327] I0812 22:33:35.327145   53188 controller_utils.go:1036] Caches are synced for certificate controller
W0812 22:33:35.422] I0812 22:33:35.421570   53188 controller_utils.go:1036] Caches are synced for service account controller
W0812 22:33:35.423] I0812 22:33:35.422196   53188 controller_utils.go:1036] Caches are synced for job controller
W0812 22:33:35.423] I0812 22:33:35.422758   53188 controller_utils.go:1036] Caches are synced for HPA controller
W0812 22:33:35.425] I0812 22:33:35.424970   49714 controller.go:606] quota admission added evaluator for: serviceaccounts
W0812 22:33:35.426] I0812 22:33:35.425912   53188 controller_utils.go:1036] Caches are synced for taint controller
... skipping 102 lines ...
I0812 22:33:39.033] +++ working dir: /go/src/k8s.io/kubernetes
I0812 22:33:39.035] +++ command: run_RESTMapper_evaluation_tests
I0812 22:33:39.050] +++ [0812 22:33:39] Creating namespace namespace-1565649219-29356
I0812 22:33:39.132] namespace/namespace-1565649219-29356 created
I0812 22:33:39.208] Context "test" modified.
I0812 22:33:39.216] +++ [0812 22:33:39] Testing RESTMapper
I0812 22:33:39.337] +++ [0812 22:33:39] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0812 22:33:39.353] +++ exit code: 0
I0812 22:33:39.476] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0812 22:33:39.477] bindings                                                                      true         Binding
I0812 22:33:39.477] componentstatuses                 cs                                          false        ComponentStatus
I0812 22:33:39.478] configmaps                        cm                                          true         ConfigMap
I0812 22:33:39.478] endpoints                         ep                                          true         Endpoints
... skipping 661 lines ...
I0812 22:33:59.887] core.sh:241: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2
I0812 22:33:59.962] (Bpoddisruptionbudget.policy/test-pdb-2 created
I0812 22:34:00.054] core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
I0812 22:34:00.130] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0812 22:34:00.230] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0812 22:34:00.313] (Bpoddisruptionbudget.policy/test-pdb-4 created
W0812 22:34:00.414] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0812 22:34:00.415] error: setting 'all' parameter but found a non empty selector. 
W0812 22:34:00.415] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0812 22:34:00.415] I0812 22:33:59.791178   49714 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0812 22:34:00.492] error: min-available and max-unavailable cannot be both specified
I0812 22:34:00.593] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0812 22:34:00.593] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:34:00.784] (Bpod/env-test-pod created
I0812 22:34:00.971] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0812 22:34:00.972] Name:         env-test-pod
I0812 22:34:00.972] Namespace:    test-kubectl-describe-pod
... skipping 176 lines ...
I0812 22:34:15.075] (Bpod/valid-pod patched
I0812 22:34:15.174] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0812 22:34:15.255] (Bpod/valid-pod patched
I0812 22:34:15.358] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0812 22:34:15.524] (Bpod/valid-pod patched
I0812 22:34:15.630] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0812 22:34:15.823] (B+++ [0812 22:34:15] "kubectl patch with resourceVersion 497" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0812 22:34:16.062] pod "valid-pod" deleted
I0812 22:34:16.076] pod/valid-pod replaced
I0812 22:34:16.178] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0812 22:34:16.349] (BSuccessful
I0812 22:34:16.349] message:error: --grace-period must have --force specified
I0812 22:34:16.349] has:\-\-grace-period must have \-\-force specified
I0812 22:34:16.506] Successful
I0812 22:34:16.506] message:error: --timeout must have --force specified
I0812 22:34:16.507] has:\-\-timeout must have \-\-force specified
I0812 22:34:16.665] node/node-v1-test created
W0812 22:34:16.766] W0812 22:34:16.664980   53188 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0812 22:34:16.867] node/node-v1-test replaced
I0812 22:34:16.938] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0812 22:34:17.025] (Bnode "node-v1-test" deleted
I0812 22:34:17.131] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0812 22:34:17.427] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0812 22:34:18.472] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 32 lines ...
I0812 22:34:19.573] namespace/namespace-1565649259-18524 created
I0812 22:34:19.650] Context "test" modified.
W0812 22:34:19.752] Edit cancelled, no changes made.
W0812 22:34:19.752] Edit cancelled, no changes made.
W0812 22:34:19.752] Edit cancelled, no changes made.
W0812 22:34:19.752] Edit cancelled, no changes made.
W0812 22:34:19.752] error: 'name' already has a value (valid-pod), and --overwrite is false
W0812 22:34:19.753] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0812 22:34:19.853] core.sh:610: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:34:19.931] (Bpod/redis-master created
I0812 22:34:19.937] pod/valid-pod created
I0812 22:34:20.047] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0812 22:34:20.150] (Bcore.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
... skipping 75 lines ...
I0812 22:34:26.905] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0812 22:34:26.907] +++ working dir: /go/src/k8s.io/kubernetes
I0812 22:34:26.909] +++ command: run_kubectl_create_error_tests
I0812 22:34:26.925] +++ [0812 22:34:26] Creating namespace namespace-1565649266-24360
I0812 22:34:27.000] namespace/namespace-1565649266-24360 created
I0812 22:34:27.073] Context "test" modified.
I0812 22:34:27.079] +++ [0812 22:34:27] Testing kubectl create with error
W0812 22:34:27.180] Error: must specify one of -f and -k
W0812 22:34:27.180] 
W0812 22:34:27.180] Create a resource from a file or from stdin.
W0812 22:34:27.180] 
W0812 22:34:27.180]  JSON and YAML formats are accepted.
W0812 22:34:27.180] 
W0812 22:34:27.181] Examples:
... skipping 41 lines ...
W0812 22:34:27.187] 
W0812 22:34:27.187] Usage:
W0812 22:34:27.187]   kubectl create -f FILENAME [options]
W0812 22:34:27.187] 
W0812 22:34:27.187] Use "kubectl <command> --help" for more information about a given command.
W0812 22:34:27.187] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0812 22:34:27.334] +++ [0812 22:34:27] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0812 22:34:27.435] kubectl convert is DEPRECATED and will be removed in a future version.
W0812 22:34:27.436] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0812 22:34:27.536] +++ exit code: 0
I0812 22:34:27.559] Recording: run_kubectl_apply_tests
I0812 22:34:27.560] Running command: run_kubectl_apply_tests
I0812 22:34:27.583] 
... skipping 19 lines ...
W0812 22:34:29.734] I0812 22:34:29.733778   49714 client.go:354] parsed scheme: ""
W0812 22:34:29.735] I0812 22:34:29.733827   49714 client.go:354] scheme "" not registered, fallback to default scheme
W0812 22:34:29.735] I0812 22:34:29.733864   49714 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0812 22:34:29.735] I0812 22:34:29.733910   49714 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0812 22:34:29.735] I0812 22:34:29.734420   49714 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0812 22:34:29.737] I0812 22:34:29.737018   49714 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0812 22:34:29.831] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0812 22:34:29.931] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0812 22:34:29.944] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0812 22:34:29.985] +++ exit code: 0
I0812 22:34:30.022] Recording: run_kubectl_run_tests
I0812 22:34:30.022] Running command: run_kubectl_run_tests
I0812 22:34:30.047] 
... skipping 95 lines ...
I0812 22:34:32.714] Context "test" modified.
I0812 22:34:32.722] +++ [0812 22:34:32] Testing kubectl create filter
I0812 22:34:32.815] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:34:33.014] (Bpod/selector-test-pod created
I0812 22:34:33.115] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0812 22:34:33.211] (BSuccessful
I0812 22:34:33.212] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0812 22:34:33.212] has:pods "selector-test-pod-dont-apply" not found
I0812 22:34:33.301] pod "selector-test-pod" deleted
I0812 22:34:33.322] +++ exit code: 0
I0812 22:34:33.361] Recording: run_kubectl_apply_deployments_tests
I0812 22:34:33.361] Running command: run_kubectl_apply_deployments_tests
I0812 22:34:33.387] 
... skipping 31 lines ...
W0812 22:34:35.916] I0812 22:34:35.821000   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649273-32171", Name:"nginx", UID:"ceea85b8-d1be-41b2-8818-2c4950336b74", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0812 22:34:35.916] I0812 22:34:35.828827   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-7dbc4d9f", UID:"1653a1bf-2404-4e1a-861a-e76210edb08f", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-lw8js
W0812 22:34:35.917] I0812 22:34:35.834314   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-7dbc4d9f", UID:"1653a1bf-2404-4e1a-861a-e76210edb08f", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-7rp75
W0812 22:34:35.917] I0812 22:34:35.838973   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-7dbc4d9f", UID:"1653a1bf-2404-4e1a-861a-e76210edb08f", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-v6m7g
I0812 22:34:36.017] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0812 22:34:40.194] (BSuccessful
I0812 22:34:40.195] message:Error from server (Conflict): error when applying patch:
I0812 22:34:40.195] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565649273-32171\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0812 22:34:40.196] to:
I0812 22:34:40.196] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0812 22:34:40.196] Name: "nginx", Namespace: "namespace-1565649273-32171"
I0812 22:34:40.199] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565649273-32171\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-12T22:34:35Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-12T22:34:35Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-12T22:34:35Z"]] "name":"nginx" "namespace":"namespace-1565649273-32171" "resourceVersion":"593" "selfLink":"/apis/apps/v1/namespaces/namespace-1565649273-32171/deployments/nginx" "uid":"ceea85b8-d1be-41b2-8818-2c4950336b74"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-12T22:34:35Z" "lastUpdateTime":"2019-08-12T22:34:35Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-12T22:34:35Z" "lastUpdateTime":"2019-08-12T22:34:35Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0812 22:34:40.199] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0812 22:34:40.199] has:Error from server (Conflict)
W0812 22:34:41.266] I0812 22:34:41.265345   53188 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565649264-3744
I0812 22:34:45.522] deployment.apps/nginx configured
W0812 22:34:45.623] I0812 22:34:45.528921   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649273-32171", Name:"nginx", UID:"649c938b-e9df-4713-b055-75b17d7e8b20", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0812 22:34:45.624] I0812 22:34:45.534937   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-594f77b9f6", UID:"276291b1-77cf-4e37-ae75-8a9e68d3c8df", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-nbgx8
W0812 22:34:45.624] I0812 22:34:45.540141   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-594f77b9f6", UID:"276291b1-77cf-4e37-ae75-8a9e68d3c8df", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-9nczp
W0812 22:34:45.625] I0812 22:34:45.541245   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-594f77b9f6", UID:"276291b1-77cf-4e37-ae75-8a9e68d3c8df", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-z8xwm
I0812 22:34:45.726] Successful
I0812 22:34:45.726] message:        "name": "nginx2"
I0812 22:34:45.727]           "name": "nginx2"
I0812 22:34:45.727] has:"name": "nginx2"
W0812 22:34:49.937] E0812 22:34:49.936197   53188 replica_set.go:450] Sync "namespace-1565649273-32171/nginx-594f77b9f6" failed with Operation cannot be fulfilled on replicasets.apps "nginx-594f77b9f6": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1565649273-32171/nginx-594f77b9f6, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 276291b1-77cf-4e37-ae75-8a9e68d3c8df, UID in object meta: 
W0812 22:34:50.901] I0812 22:34:50.901096   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649273-32171", Name:"nginx", UID:"f50229d8-c111-4929-9b15-f4ceb4eb38a1", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0812 22:34:50.906] I0812 22:34:50.905945   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-594f77b9f6", UID:"f3a83f34-2fa4-40af-8d37-52dd038a983a", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-ccfww
W0812 22:34:50.910] I0812 22:34:50.909781   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-594f77b9f6", UID:"f3a83f34-2fa4-40af-8d37-52dd038a983a", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-l8v9q
W0812 22:34:50.912] I0812 22:34:50.911306   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649273-32171", Name:"nginx-594f77b9f6", UID:"f3a83f34-2fa4-40af-8d37-52dd038a983a", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-fknf7
I0812 22:34:51.013] Successful
I0812 22:34:51.014] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 183 lines ...
I0812 22:34:53.107] +++ [0812 22:34:53] Creating namespace namespace-1565649293-31163
I0812 22:34:53.194] namespace/namespace-1565649293-31163 created
I0812 22:34:53.268] Context "test" modified.
I0812 22:34:53.276] +++ [0812 22:34:53] Testing kubectl get
I0812 22:34:53.371] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:34:53.466] (BSuccessful
I0812 22:34:53.467] message:Error from server (NotFound): pods "abc" not found
I0812 22:34:53.467] has:pods "abc" not found
I0812 22:34:53.563] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:34:53.661] (BSuccessful
I0812 22:34:53.661] message:Error from server (NotFound): pods "abc" not found
I0812 22:34:53.662] has:pods "abc" not found
I0812 22:34:53.758] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:34:53.846] (BSuccessful
I0812 22:34:53.846] message:{
I0812 22:34:53.847]     "apiVersion": "v1",
I0812 22:34:53.847]     "items": [],
... skipping 23 lines ...
I0812 22:34:54.216] has not:No resources found
I0812 22:34:54.317] Successful
I0812 22:34:54.317] message:NAME
I0812 22:34:54.318] has not:No resources found
I0812 22:34:54.414] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:34:54.525] (BSuccessful
I0812 22:34:54.525] message:error: the server doesn't have a resource type "foobar"
I0812 22:34:54.525] has not:No resources found
I0812 22:34:54.618] Successful
I0812 22:34:54.619] message:No resources found in namespace-1565649293-31163 namespace.
I0812 22:34:54.619] has:No resources found
I0812 22:34:54.708] Successful
I0812 22:34:54.708] message:
I0812 22:34:54.708] has not:No resources found
I0812 22:34:54.800] Successful
I0812 22:34:54.800] message:No resources found in namespace-1565649293-31163 namespace.
I0812 22:34:54.801] has:No resources found
I0812 22:34:54.899] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:34:54.990] (BSuccessful
I0812 22:34:54.990] message:Error from server (NotFound): pods "abc" not found
I0812 22:34:54.991] has:pods "abc" not found
I0812 22:34:54.994] FAIL!
I0812 22:34:54.994] message:Error from server (NotFound): pods "abc" not found
I0812 22:34:54.994] has not:List
I0812 22:34:54.994] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0812 22:34:55.117] Successful
I0812 22:34:55.117] message:I0812 22:34:55.063936   63752 loader.go:375] Config loaded from file:  /tmp/tmp.H1TfYpmnae/.kube/config
I0812 22:34:55.118] I0812 22:34:55.065921   63752 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0812 22:34:55.118] I0812 22:34:55.088433   63752 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0812 22:35:00.791] Successful
I0812 22:35:00.792] message:NAME    DATA   AGE
I0812 22:35:00.792] one     0      0s
I0812 22:35:00.792] three   0      0s
I0812 22:35:00.792] two     0      0s
I0812 22:35:00.792] STATUS    REASON          MESSAGE
I0812 22:35:00.793] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 22:35:00.793] has not:watch is only supported on individual resources
I0812 22:35:01.888] Successful
I0812 22:35:01.888] message:STATUS    REASON          MESSAGE
I0812 22:35:01.889] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 22:35:01.889] has not:watch is only supported on individual resources
I0812 22:35:01.895] +++ [0812 22:35:01] Creating namespace namespace-1565649301-3774
I0812 22:35:01.981] namespace/namespace-1565649301-3774 created
I0812 22:35:02.062] Context "test" modified.
I0812 22:35:02.172] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:35:02.361] pod/valid-pod created
... skipping 104 lines ...
I0812 22:35:02.468] }
I0812 22:35:02.570] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 22:35:02.831] <no value>Successful
I0812 22:35:02.832] message:valid-pod:
I0812 22:35:02.832] has:valid-pod:
I0812 22:35:02.924] Successful
I0812 22:35:02.924] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0812 22:35:02.924] 	template was:
I0812 22:35:02.925] 		{.missing}
I0812 22:35:02.925] 	object given to jsonpath engine was:
I0812 22:35:02.927] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-12T22:35:02Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-12T22:35:02Z"}}, "name":"valid-pod", "namespace":"namespace-1565649301-3774", "resourceVersion":"694", "selfLink":"/api/v1/namespaces/namespace-1565649301-3774/pods/valid-pod", "uid":"b3647ed6-edf2-4eaa-9d6c-744acd6b9447"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0812 22:35:02.927] has:missing is not found
I0812 22:35:03.014] Successful
I0812 22:35:03.015] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0812 22:35:03.015] 	template was:
I0812 22:35:03.015] 		{{.missing}}
I0812 22:35:03.015] 	raw data was:
I0812 22:35:03.017] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-12T22:35:02Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-12T22:35:02Z"}],"name":"valid-pod","namespace":"namespace-1565649301-3774","resourceVersion":"694","selfLink":"/api/v1/namespaces/namespace-1565649301-3774/pods/valid-pod","uid":"b3647ed6-edf2-4eaa-9d6c-744acd6b9447"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0812 22:35:03.017] 	object given to template engine was:
I0812 22:35:03.019] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-12T22:35:02Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-12T22:35:02Z]] name:valid-pod namespace:namespace-1565649301-3774 resourceVersion:694 selfLink:/api/v1/namespaces/namespace-1565649301-3774/pods/valid-pod uid:b3647ed6-edf2-4eaa-9d6c-744acd6b9447] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0812 22:35:03.019] has:map has no entry for key "missing"
W0812 22:35:03.120] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0812 22:35:04.113] Successful
I0812 22:35:04.113] message:NAME        READY   STATUS    RESTARTS   AGE
I0812 22:35:04.113] valid-pod   0/1     Pending   0          1s
I0812 22:35:04.113] STATUS      REASON          MESSAGE
I0812 22:35:04.114] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 22:35:04.114] has:STATUS
I0812 22:35:04.115] Successful
I0812 22:35:04.115] message:NAME        READY   STATUS    RESTARTS   AGE
I0812 22:35:04.116] valid-pod   0/1     Pending   0          1s
I0812 22:35:04.116] STATUS      REASON          MESSAGE
I0812 22:35:04.116] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 22:35:04.116] has:valid-pod
I0812 22:35:05.206] Successful
I0812 22:35:05.206] message:pod/valid-pod
I0812 22:35:05.207] has not:STATUS
I0812 22:35:05.208] Successful
I0812 22:35:05.209] message:pod/valid-pod
... skipping 144 lines ...
I0812 22:35:06.313] status:
I0812 22:35:06.313]   phase: Pending
I0812 22:35:06.313]   qosClass: Guaranteed
I0812 22:35:06.313] ---
I0812 22:35:06.313] has:name: valid-pod
I0812 22:35:06.391] Successful
I0812 22:35:06.392] message:Error from server (NotFound): pods "invalid-pod" not found
I0812 22:35:06.392] has:"invalid-pod" not found
I0812 22:35:06.483] pod "valid-pod" deleted
I0812 22:35:06.583] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:35:06.765] pod/redis-master created
I0812 22:35:06.771] pod/valid-pod created
I0812 22:35:06.872] Successful
... skipping 35 lines ...
I0812 22:35:08.134] +++ command: run_kubectl_exec_pod_tests
I0812 22:35:08.148] +++ [0812 22:35:08] Creating namespace namespace-1565649308-14617
I0812 22:35:08.234] namespace/namespace-1565649308-14617 created
I0812 22:35:08.323] Context "test" modified.
I0812 22:35:08.330] +++ [0812 22:35:08] Testing kubectl exec POD COMMAND
I0812 22:35:08.427] Successful
I0812 22:35:08.427] message:Error from server (NotFound): pods "abc" not found
I0812 22:35:08.427] has:pods "abc" not found
I0812 22:35:08.606] pod/test-pod created
I0812 22:35:08.712] Successful
I0812 22:35:08.712] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0812 22:35:08.712] has not:pods "test-pod" not found
I0812 22:35:08.714] Successful
I0812 22:35:08.714] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0812 22:35:08.714] has not:pod or type/name must be specified
I0812 22:35:08.795] pod "test-pod" deleted
I0812 22:35:08.817] +++ exit code: 0
I0812 22:35:08.858] Recording: run_kubectl_exec_resource_name_tests
I0812 22:35:08.858] Running command: run_kubectl_exec_resource_name_tests
I0812 22:35:08.883] 
... skipping 2 lines ...
I0812 22:35:08.891] +++ command: run_kubectl_exec_resource_name_tests
I0812 22:35:08.906] +++ [0812 22:35:08] Creating namespace namespace-1565649308-28165
I0812 22:35:08.989] namespace/namespace-1565649308-28165 created
I0812 22:35:09.068] Context "test" modified.
I0812 22:35:09.075] +++ [0812 22:35:09] Testing kubectl exec TYPE/NAME COMMAND
I0812 22:35:09.188] Successful
I0812 22:35:09.188] message:error: the server doesn't have a resource type "foo"
I0812 22:35:09.188] has:error:
I0812 22:35:09.287] Successful
I0812 22:35:09.287] message:Error from server (NotFound): deployments.apps "bar" not found
I0812 22:35:09.287] has:"bar" not found
I0812 22:35:09.472] pod/test-pod created
I0812 22:35:09.655] replicaset.apps/frontend created
W0812 22:35:09.756] I0812 22:35:09.659679   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649308-28165", Name:"frontend", UID:"84990e39-39e3-4054-baa7-72aee18ecb85", APIVersion:"apps/v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j7pcc
W0812 22:35:09.757] I0812 22:35:09.664085   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649308-28165", Name:"frontend", UID:"84990e39-39e3-4054-baa7-72aee18ecb85", APIVersion:"apps/v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nx5ld
W0812 22:35:09.758] I0812 22:35:09.664798   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649308-28165", Name:"frontend", UID:"84990e39-39e3-4054-baa7-72aee18ecb85", APIVersion:"apps/v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f6ffz
I0812 22:35:09.859] configmap/test-set-env-config created
I0812 22:35:09.949] Successful
I0812 22:35:09.950] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0812 22:35:09.950] has:not implemented
I0812 22:35:10.049] Successful
I0812 22:35:10.050] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0812 22:35:10.050] has not:not found
I0812 22:35:10.052] Successful
I0812 22:35:10.052] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0812 22:35:10.052] has not:pod or type/name must be specified
I0812 22:35:10.173] Successful
I0812 22:35:10.174] message:Error from server (BadRequest): pod frontend-f6ffz does not have a host assigned
I0812 22:35:10.174] has not:not found
I0812 22:35:10.175] Successful
I0812 22:35:10.176] message:Error from server (BadRequest): pod frontend-f6ffz does not have a host assigned
I0812 22:35:10.176] has not:pod or type/name must be specified
I0812 22:35:10.262] pod "test-pod" deleted
I0812 22:35:10.355] replicaset.apps "frontend" deleted
I0812 22:35:10.446] configmap "test-set-env-config" deleted
I0812 22:35:10.467] +++ exit code: 0
I0812 22:35:10.511] Recording: run_create_secret_tests
I0812 22:35:10.511] Running command: run_create_secret_tests
I0812 22:35:10.536] 
I0812 22:35:10.538] +++ Running case: test-cmd.run_create_secret_tests 
I0812 22:35:10.542] +++ working dir: /go/src/k8s.io/kubernetes
I0812 22:35:10.544] +++ command: run_create_secret_tests
I0812 22:35:10.647] Successful
I0812 22:35:10.648] message:Error from server (NotFound): secrets "mysecret" not found
I0812 22:35:10.648] has:secrets "mysecret" not found
I0812 22:35:10.822] Successful
I0812 22:35:10.822] message:Error from server (NotFound): secrets "mysecret" not found
I0812 22:35:10.823] has:secrets "mysecret" not found
I0812 22:35:10.824] Successful
I0812 22:35:10.825] message:user-specified
I0812 22:35:10.825] has:user-specified
I0812 22:35:10.905] Successful
I0812 22:35:10.989] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"a7a89ac1-f5a2-4957-817a-3903ae296a5a","resourceVersion":"769","creationTimestamp":"2019-08-12T22:35:10Z"}}
... skipping 2 lines ...
I0812 22:35:11.178] has:uid
I0812 22:35:11.253] Successful
I0812 22:35:11.253] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"a7a89ac1-f5a2-4957-817a-3903ae296a5a","resourceVersion":"770","creationTimestamp":"2019-08-12T22:35:10Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-12T22:35:11Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0812 22:35:11.253] has:config1
I0812 22:35:11.331] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"a7a89ac1-f5a2-4957-817a-3903ae296a5a"}}
I0812 22:35:11.434] Successful
I0812 22:35:11.435] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0812 22:35:11.435] has:configmaps "tester-update-cm" not found
I0812 22:35:11.450] +++ exit code: 0
I0812 22:35:11.492] Recording: run_kubectl_create_kustomization_directory_tests
I0812 22:35:11.492] Running command: run_kubectl_create_kustomization_directory_tests
I0812 22:35:11.517] 
I0812 22:35:11.520] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0812 22:35:14.424] valid-pod   0/1     Pending   0          0s
I0812 22:35:14.424] has:valid-pod
I0812 22:35:15.522] Successful
I0812 22:35:15.522] message:NAME        READY   STATUS    RESTARTS   AGE
I0812 22:35:15.523] valid-pod   0/1     Pending   0          0s
I0812 22:35:15.523] STATUS      REASON          MESSAGE
I0812 22:35:15.523] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 22:35:15.523] has:Timeout exceeded while reading body
I0812 22:35:15.610] Successful
I0812 22:35:15.611] message:NAME        READY   STATUS    RESTARTS   AGE
I0812 22:35:15.611] valid-pod   0/1     Pending   0          1s
I0812 22:35:15.611] has:valid-pod
I0812 22:35:15.692] Successful
I0812 22:35:15.693] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0812 22:35:15.693] has:Invalid timeout value
I0812 22:35:15.795] pod "valid-pod" deleted
I0812 22:35:15.816] +++ exit code: 0
I0812 22:35:15.855] Recording: run_crd_tests
I0812 22:35:15.856] Running command: run_crd_tests
I0812 22:35:15.882] 
... skipping 245 lines ...
I0812 22:35:20.886] foo.company.com/test patched
I0812 22:35:20.989] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0812 22:35:21.086] foo.company.com/test patched
I0812 22:35:21.189] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0812 22:35:21.286] foo.company.com/test patched
I0812 22:35:21.389] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0812 22:35:21.560] +++ [0812 22:35:21] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0812 22:35:21.633] {
I0812 22:35:21.633]     "apiVersion": "company.com/v1",
I0812 22:35:21.633]     "kind": "Foo",
I0812 22:35:21.634]     "metadata": {
I0812 22:35:21.634]         "annotations": {
I0812 22:35:21.634]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 346 lines ...
I0812 22:35:44.458] namespace/non-native-resources created
I0812 22:35:44.642] bar.company.com/test created
I0812 22:35:44.751] crd.sh:455: Successful get bars {{len .items}}: 1
I0812 22:35:44.842] namespace "non-native-resources" deleted
I0812 22:35:50.125] crd.sh:458: Successful get bars {{len .items}}: 0
I0812 22:35:50.309] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0812 22:35:50.410] Error from server (NotFound): namespaces "non-native-resources" not found
I0812 22:35:50.510] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0812 22:35:50.538] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0812 22:35:50.642] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0812 22:35:50.674] +++ exit code: 0
I0812 22:35:50.712] Recording: run_cmd_with_img_tests
I0812 22:35:50.713] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0812 22:35:51.017] I0812 22:35:51.016426   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649350-29700", Name:"test1-9797f89d8", UID:"e59ebf12-5a78-404a-aefc-37f346873a31", APIVersion:"apps/v1", ResourceVersion:"924", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-rhwcr
I0812 22:35:51.117] Successful
I0812 22:35:51.118] message:deployment.apps/test1 created
I0812 22:35:51.118] has:deployment.apps/test1 created
I0812 22:35:51.132] deployment.apps "test1" deleted
I0812 22:35:51.217] Successful
I0812 22:35:51.217] message:error: Invalid image name "InvalidImageName": invalid reference format
I0812 22:35:51.217] has:error: Invalid image name "InvalidImageName": invalid reference format
I0812 22:35:51.231] +++ exit code: 0
I0812 22:35:51.279] +++ [0812 22:35:51] Testing recursive resources
I0812 22:35:51.286] +++ [0812 22:35:51] Creating namespace namespace-1565649351-8322
I0812 22:35:51.368] namespace/namespace-1565649351-8322 created
I0812 22:35:51.441] Context "test" modified.
W0812 22:35:51.542] W0812 22:35:51.320592   49714 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0812 22:35:51.543] E0812 22:35:51.322433   53188 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:51.543] W0812 22:35:51.421163   49714 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0812 22:35:51.543] E0812 22:35:51.423298   53188 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:51.552] W0812 22:35:51.551453   49714 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0812 22:35:51.553] E0812 22:35:51.553067   53188 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:35:51.654] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:35:51.895] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:51.898] Successful
I0812 22:35:51.899] message:pod/busybox0 created
I0812 22:35:51.899] pod/busybox1 created
I0812 22:35:51.899] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0812 22:35:51.900] has:error validating data: kind not set
I0812 22:35:52.000] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:52.203] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0812 22:35:52.205] Successful
I0812 22:35:52.206] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:52.206] has:Object 'Kind' is missing
I0812 22:35:52.300] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:52.584] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0812 22:35:52.586] Successful
I0812 22:35:52.586] message:pod/busybox0 replaced
I0812 22:35:52.586] pod/busybox1 replaced
I0812 22:35:52.587] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0812 22:35:52.587] has:error validating data: kind not set
I0812 22:35:52.688] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:52.799] Successful
I0812 22:35:52.800] message:Name:         busybox0
I0812 22:35:52.800] Namespace:    namespace-1565649351-8322
I0812 22:35:52.800] Priority:     0
I0812 22:35:52.800] Node:         <none>
... skipping 159 lines ...
I0812 22:35:52.815] has:Object 'Kind' is missing
I0812 22:35:52.913] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:53.119] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0812 22:35:53.123] Successful
I0812 22:35:53.123] message:pod/busybox0 annotated
I0812 22:35:53.123] pod/busybox1 annotated
I0812 22:35:53.124] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:53.124] has:Object 'Kind' is missing
I0812 22:35:53.230] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:53.543] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0812 22:35:53.546] Successful
I0812 22:35:53.546] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0812 22:35:53.547] pod/busybox0 configured
I0812 22:35:53.547] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0812 22:35:53.547] pod/busybox1 configured
I0812 22:35:53.547] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0812 22:35:53.548] has:error validating data: kind not set
I0812 22:35:53.640] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:35:53.806] deployment.apps/nginx created
W0812 22:35:53.907] W0812 22:35:51.655892   49714 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0812 22:35:53.908] E0812 22:35:51.658081   53188 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.908] E0812 22:35:52.323958   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.908] E0812 22:35:52.424941   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.909] E0812 22:35:52.554580   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.909] E0812 22:35:52.660308   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.909] E0812 22:35:53.325842   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.910] E0812 22:35:53.426372   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.910] E0812 22:35:53.556521   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.910] E0812 22:35:53.662574   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:53.911] I0812 22:35:53.812200   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649351-8322", Name:"nginx", UID:"808729a9-b30b-4214-8787-7094013cda7b", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0812 22:35:53.911] I0812 22:35:53.817421   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649351-8322", Name:"nginx-bbbbb95b5", UID:"ba404982-a2f3-46a5-afa0-2c51d24fe36f", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-f9bk2
W0812 22:35:53.911] I0812 22:35:53.822082   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649351-8322", Name:"nginx-bbbbb95b5", UID:"ba404982-a2f3-46a5-afa0-2c51d24fe36f", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-tb5n8
W0812 22:35:53.912] I0812 22:35:53.823057   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649351-8322", Name:"nginx-bbbbb95b5", UID:"ba404982-a2f3-46a5-afa0-2c51d24fe36f", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-2tc8m
I0812 22:35:54.012] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0812 22:35:54.019] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 41 lines ...
I0812 22:35:54.216]       terminationGracePeriodSeconds: 30
I0812 22:35:54.216] status: {}
I0812 22:35:54.216] has:extensions/v1beta1
I0812 22:35:54.307] deployment.apps "nginx" deleted
W0812 22:35:54.408] kubectl convert is DEPRECATED and will be removed in a future version.
W0812 22:35:54.409] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0812 22:35:54.409] E0812 22:35:54.327697   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:54.428] E0812 22:35:54.428023   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:35:54.529] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:54.620] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:54.622] Successful
I0812 22:35:54.622] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0812 22:35:54.623] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0812 22:35:54.623] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:54.623] has:Object 'Kind' is missing
I0812 22:35:54.725] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:54.819] Successful
I0812 22:35:54.820] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:54.820] has:busybox0:busybox1:
I0812 22:35:54.821] Successful
I0812 22:35:54.822] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:54.822] has:Object 'Kind' is missing
I0812 22:35:54.921] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:55.026] pod/busybox0 labeled
I0812 22:35:55.027] pod/busybox1 labeled
I0812 22:35:55.028] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:55.130] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0812 22:35:55.132] Successful
I0812 22:35:55.132] message:pod/busybox0 labeled
I0812 22:35:55.132] pod/busybox1 labeled
I0812 22:35:55.133] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:55.133] has:Object 'Kind' is missing
I0812 22:35:55.233] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:55.334] pod/busybox0 patched
I0812 22:35:55.335] pod/busybox1 patched
I0812 22:35:55.335] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:55.434] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0812 22:35:55.436] Successful
I0812 22:35:55.437] message:pod/busybox0 patched
I0812 22:35:55.438] pod/busybox1 patched
I0812 22:35:55.438] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:55.438] has:Object 'Kind' is missing
I0812 22:35:55.543] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:55.746] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:35:55.749] Successful
I0812 22:35:55.750] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0812 22:35:55.751] pod "busybox0" force deleted
I0812 22:35:55.751] pod "busybox1" force deleted
I0812 22:35:55.751] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 22:35:55.751] has:Object 'Kind' is missing
I0812 22:35:55.849] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:35:56.024] replicationcontroller/busybox0 created
I0812 22:35:56.033] replicationcontroller/busybox1 created
W0812 22:35:56.134] E0812 22:35:54.558605   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:56.134] E0812 22:35:54.664588   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:56.135] E0812 22:35:55.329057   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:56.135] E0812 22:35:55.429911   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:56.135] E0812 22:35:55.560269   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:56.136] E0812 22:35:55.666469   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:56.136] I0812 22:35:56.030519   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649351-8322", Name:"busybox0", UID:"7a4798a1-68f0-4dd7-aa35-e865beb2b59d", APIVersion:"v1", ResourceVersion:"980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-q9448
W0812 22:35:56.136] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0812 22:35:56.137] I0812 22:35:56.037242   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649351-8322", Name:"busybox1", UID:"b687c08a-498b-4065-8ab6-57a7ac5db978", APIVersion:"v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xwjdc
I0812 22:35:56.237] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:56.253] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:56.352] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0812 22:35:56.454] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0812 22:35:56.662] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0812 22:35:56.759] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0812 22:35:56.762] Successful
I0812 22:35:56.763] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0812 22:35:56.763] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0812 22:35:56.763] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:35:56.763] has:Object 'Kind' is missing
I0812 22:35:56.849] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0812 22:35:56.940] horizontalpodautoscaler.autoscaling "busybox1" deleted
W0812 22:35:57.041] E0812 22:35:56.330723   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:57.042] E0812 22:35:56.431582   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:57.042] E0812 22:35:56.562085   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:57.043] E0812 22:35:56.668257   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:35:57.143] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:57.151] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0812 22:35:57.254] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0812 22:35:57.491] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0812 22:35:57.589] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0812 22:35:57.591] Successful
I0812 22:35:57.591] message:service/busybox0 exposed
I0812 22:35:57.591] service/busybox1 exposed
I0812 22:35:57.592] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:35:57.592] has:Object 'Kind' is missing
I0812 22:35:57.695] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:57.794] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0812 22:35:57.891] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0812 22:35:58.103] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0812 22:35:58.205] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0812 22:35:58.207] Successful
I0812 22:35:58.208] message:replicationcontroller/busybox0 scaled
I0812 22:35:58.208] replicationcontroller/busybox1 scaled
I0812 22:35:58.208] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:35:58.209] has:Object 'Kind' is missing
I0812 22:35:58.309] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:35:58.513] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:35:58.516] Successful
I0812 22:35:58.517] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0812 22:35:58.517] replicationcontroller "busybox0" force deleted
I0812 22:35:58.517] replicationcontroller "busybox1" force deleted
I0812 22:35:58.517] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:35:58.517] has:Object 'Kind' is missing
I0812 22:35:58.612] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:35:58.795] deployment.apps/nginx1-deployment created
I0812 22:35:58.801] deployment.apps/nginx0-deployment created
W0812 22:35:58.902] E0812 22:35:57.332595   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:58.903] E0812 22:35:57.433644   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:58.903] E0812 22:35:57.563868   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:58.903] E0812 22:35:57.670089   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:58.904] I0812 22:35:57.994559   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649351-8322", Name:"busybox0", UID:"7a4798a1-68f0-4dd7-aa35-e865beb2b59d", APIVersion:"v1", ResourceVersion:"1002", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-kc8rq
W0812 22:35:58.904] I0812 22:35:58.005455   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649351-8322", Name:"busybox1", UID:"b687c08a-498b-4065-8ab6-57a7ac5db978", APIVersion:"v1", ResourceVersion:"1006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jhljq
W0812 22:35:58.905] E0812 22:35:58.334178   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:58.905] E0812 22:35:58.435604   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:58.905] E0812 22:35:58.565783   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:58.905] E0812 22:35:58.672158   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:58.906] I0812 22:35:58.800558   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649351-8322", Name:"nginx1-deployment", UID:"20c24e2d-5e75-48d5-b8df-0ec14ab5e2db", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0812 22:35:58.906] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0812 22:35:58.906] I0812 22:35:58.806468   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649351-8322", Name:"nginx0-deployment", UID:"9eac3ee2-5ceb-4b37-a5b9-084af0bba74c", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0812 22:35:58.907] I0812 22:35:58.806526   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649351-8322", Name:"nginx1-deployment-84f7f49fb7", UID:"a293d0bd-58d2-4887-abc5-f487dd40589d", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-pf845
W0812 22:35:58.907] I0812 22:35:58.810765   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649351-8322", Name:"nginx0-deployment-57475bf54d", UID:"958ea0de-ee33-4d6f-9e6b-76d03a200fe4", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-7hb2k
W0812 22:35:58.907] I0812 22:35:58.812567   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649351-8322", Name:"nginx1-deployment-84f7f49fb7", UID:"a293d0bd-58d2-4887-abc5-f487dd40589d", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-8npjx
W0812 22:35:58.908] I0812 22:35:58.814769   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649351-8322", Name:"nginx0-deployment-57475bf54d", UID:"958ea0de-ee33-4d6f-9e6b-76d03a200fe4", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-fw6hw
I0812 22:35:59.008] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0812 22:35:59.019] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0812 22:35:59.249] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0812 22:35:59.252] Successful
I0812 22:35:59.252] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0812 22:35:59.253] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0812 22:35:59.253] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 22:35:59.254] has:Object 'Kind' is missing
W0812 22:35:59.354] E0812 22:35:59.336580   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:59.438] E0812 22:35:59.437174   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:35:59.539] deployment.apps/nginx1-deployment paused
I0812 22:35:59.540] deployment.apps/nginx0-deployment paused
I0812 22:35:59.540] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0812 22:35:59.541] Successful
I0812 22:35:59.542] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 22:35:59.542] has:Object 'Kind' is missing
I0812 22:35:59.633] deployment.apps/nginx1-deployment resumed
I0812 22:35:59.640] deployment.apps/nginx0-deployment resumed
W0812 22:35:59.741] E0812 22:35:59.567296   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:35:59.741] E0812 22:35:59.673959   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:35:59.842] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0812 22:35:59.843] Successful
I0812 22:35:59.843] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 22:35:59.843] has:Object 'Kind' is missing
I0812 22:35:59.888] Successful
I0812 22:35:59.888] message:deployment.apps/nginx1-deployment 
I0812 22:35:59.888] REVISION  CHANGE-CAUSE
I0812 22:35:59.888] 1         <none>
I0812 22:35:59.888] 
I0812 22:35:59.889] deployment.apps/nginx0-deployment 
I0812 22:35:59.889] REVISION  CHANGE-CAUSE
I0812 22:35:59.889] 1         <none>
I0812 22:35:59.889] 
I0812 22:35:59.889] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 22:35:59.889] has:nginx0-deployment
I0812 22:35:59.890] Successful
I0812 22:35:59.891] message:deployment.apps/nginx1-deployment 
I0812 22:35:59.891] REVISION  CHANGE-CAUSE
I0812 22:35:59.891] 1         <none>
I0812 22:35:59.891] 
I0812 22:35:59.891] deployment.apps/nginx0-deployment 
I0812 22:35:59.891] REVISION  CHANGE-CAUSE
I0812 22:35:59.891] 1         <none>
I0812 22:35:59.891] 
I0812 22:35:59.892] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 22:35:59.892] has:nginx1-deployment
I0812 22:35:59.893] Successful
I0812 22:35:59.894] message:deployment.apps/nginx1-deployment 
I0812 22:35:59.894] REVISION  CHANGE-CAUSE
I0812 22:35:59.894] 1         <none>
I0812 22:35:59.894] 
I0812 22:35:59.894] deployment.apps/nginx0-deployment 
I0812 22:35:59.894] REVISION  CHANGE-CAUSE
I0812 22:35:59.894] 1         <none>
I0812 22:35:59.894] 
I0812 22:35:59.895] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 22:35:59.895] has:Object 'Kind' is missing
I0812 22:35:59.983] deployment.apps "nginx1-deployment" force deleted
I0812 22:35:59.991] deployment.apps "nginx0-deployment" force deleted
W0812 22:36:00.092] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0812 22:36:00.093] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0812 22:36:00.339] E0812 22:36:00.338556   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:00.440] E0812 22:36:00.439527   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:00.570] E0812 22:36:00.569370   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:00.676] E0812 22:36:00.675486   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:01.101] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:01.289] replicationcontroller/busybox0 created
I0812 22:36:01.295] replicationcontroller/busybox1 created
W0812 22:36:01.396] I0812 22:36:01.294515   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649351-8322", Name:"busybox0", UID:"c60743de-10ca-4140-9ab6-347dccbc6f0f", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-drhhs
W0812 22:36:01.397] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0812 22:36:01.397] I0812 22:36:01.299446   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649351-8322", Name:"busybox1", UID:"cd3581ba-53ae-4e7f-9c97-35c30e100dba", APIVersion:"v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-n27x7
W0812 22:36:01.397] E0812 22:36:01.340401   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:01.442] E0812 22:36:01.441213   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:01.542] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 22:36:01.543] Successful
I0812 22:36:01.544] message:no rollbacker has been implemented for "ReplicationController"
I0812 22:36:01.544] no rollbacker has been implemented for "ReplicationController"
I0812 22:36:01.544] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.545] has:no rollbacker has been implemented for "ReplicationController"
I0812 22:36:01.545] Successful
I0812 22:36:01.545] message:no rollbacker has been implemented for "ReplicationController"
I0812 22:36:01.545] no rollbacker has been implemented for "ReplicationController"
I0812 22:36:01.546] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.546] has:Object 'Kind' is missing
I0812 22:36:01.640] Successful
I0812 22:36:01.641] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.641] error: replicationcontrollers "busybox0" pausing is not supported
I0812 22:36:01.641] error: replicationcontrollers "busybox1" pausing is not supported
I0812 22:36:01.641] has:Object 'Kind' is missing
I0812 22:36:01.643] Successful
I0812 22:36:01.643] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.643] error: replicationcontrollers "busybox0" pausing is not supported
I0812 22:36:01.644] error: replicationcontrollers "busybox1" pausing is not supported
I0812 22:36:01.644] has:replicationcontrollers "busybox0" pausing is not supported
I0812 22:36:01.645] Successful
I0812 22:36:01.646] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.646] error: replicationcontrollers "busybox0" pausing is not supported
I0812 22:36:01.647] error: replicationcontrollers "busybox1" pausing is not supported
I0812 22:36:01.647] has:replicationcontrollers "busybox1" pausing is not supported
W0812 22:36:01.747] E0812 22:36:01.571223   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:01.748] E0812 22:36:01.677092   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:01.844] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0812 22:36:01.862] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.963] Successful
I0812 22:36:01.964] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.964] error: replicationcontrollers "busybox0" resuming is not supported
I0812 22:36:01.964] error: replicationcontrollers "busybox1" resuming is not supported
I0812 22:36:01.964] has:Object 'Kind' is missing
I0812 22:36:01.964] Successful
I0812 22:36:01.965] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.965] error: replicationcontrollers "busybox0" resuming is not supported
I0812 22:36:01.965] error: replicationcontrollers "busybox1" resuming is not supported
I0812 22:36:01.965] has:replicationcontrollers "busybox0" resuming is not supported
I0812 22:36:01.965] Successful
I0812 22:36:01.966] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 22:36:01.966] error: replicationcontrollers "busybox0" resuming is not supported
I0812 22:36:01.966] error: replicationcontrollers "busybox1" resuming is not supported
I0812 22:36:01.966] has:replicationcontrollers "busybox0" resuming is not supported
I0812 22:36:01.966] replicationcontroller "busybox0" force deleted
I0812 22:36:01.966] replicationcontroller "busybox1" force deleted
W0812 22:36:02.343] E0812 22:36:02.342759   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:02.443] E0812 22:36:02.442962   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:02.573] E0812 22:36:02.573165   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:02.679] E0812 22:36:02.678898   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:02.870] Recording: run_namespace_tests
I0812 22:36:02.870] Running command: run_namespace_tests
I0812 22:36:02.898] 
I0812 22:36:02.900] +++ Running case: test-cmd.run_namespace_tests 
I0812 22:36:02.903] +++ working dir: /go/src/k8s.io/kubernetes
I0812 22:36:02.905] +++ command: run_namespace_tests
I0812 22:36:02.917] +++ [0812 22:36:02] Testing kubectl(v1:namespaces)
I0812 22:36:02.999] namespace/my-namespace created
I0812 22:36:03.101] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0812 22:36:03.189] namespace "my-namespace" deleted
W0812 22:36:03.345] E0812 22:36:03.344551   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:03.445] E0812 22:36:03.444845   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:03.576] E0812 22:36:03.575213   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:03.682] E0812 22:36:03.681376   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:04.347] E0812 22:36:04.346389   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:04.447] E0812 22:36:04.446561   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:04.577] E0812 22:36:04.576987   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:04.684] E0812 22:36:04.683106   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:05.349] E0812 22:36:05.348521   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:05.449] E0812 22:36:05.448870   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:05.579] E0812 22:36:05.578966   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:05.685] E0812 22:36:05.684812   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:06.351] E0812 22:36:06.350926   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:06.453] E0812 22:36:06.452557   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:06.582] E0812 22:36:06.582120   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:06.687] E0812 22:36:06.686688   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:07.353] E0812 22:36:07.353053   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:07.456] E0812 22:36:07.455143   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:07.534] I0812 22:36:07.533481   53188 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0812 22:36:07.584] E0812 22:36:07.584042   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:07.635] I0812 22:36:07.634351   53188 controller_utils.go:1036] Caches are synced for resource quota controller
W0812 22:36:07.689] E0812 22:36:07.688491   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:07.950] I0812 22:36:07.949667   53188 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0812 22:36:08.051] I0812 22:36:08.050729   53188 controller_utils.go:1036] Caches are synced for garbage collector controller
I0812 22:36:08.338] namespace/my-namespace condition met
I0812 22:36:08.435] Successful
I0812 22:36:08.436] message:Error from server (NotFound): namespaces "my-namespace" not found
I0812 22:36:08.436] has: not found
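Note: the create/delete/NotFound sequence above is the namespace lifecycle check. A minimal way to reproduce it by hand, assuming kubectl wait is available, might look like:

  kubectl create namespace my-namespace
  kubectl delete namespace my-namespace
  # block until the finalizers have run and the object is really gone
  kubectl wait --for=delete namespace/my-namespace --timeout=60s
  kubectl get namespace my-namespace     # Error from server (NotFound): namespaces "my-namespace" not found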
I0812 22:36:08.520] namespace/my-namespace created
I0812 22:36:08.627] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0812 22:36:08.935] Successful
I0812 22:36:08.935] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0812 22:36:08.935] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0812 22:36:08.938] namespace "namespace-1565649312-4385" deleted
I0812 22:36:08.938] namespace "namespace-1565649313-30255" deleted
I0812 22:36:08.939] namespace "namespace-1565649315-30985" deleted
I0812 22:36:08.939] namespace "namespace-1565649317-19361" deleted
I0812 22:36:08.939] namespace "namespace-1565649350-29700" deleted
I0812 22:36:08.939] namespace "namespace-1565649351-8322" deleted
I0812 22:36:08.939] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0812 22:36:08.939] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0812 22:36:08.939] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0812 22:36:08.939] has:warning: deleting cluster-scoped resources
I0812 22:36:08.939] Successful
I0812 22:36:08.940] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0812 22:36:08.940] namespace "kube-node-lease" deleted
I0812 22:36:08.940] namespace "my-namespace" deleted
I0812 22:36:08.940] namespace "namespace-1565649216-3785" deleted
... skipping 27 lines ...
I0812 22:36:08.943] namespace "namespace-1565649312-4385" deleted
I0812 22:36:08.943] namespace "namespace-1565649313-30255" deleted
I0812 22:36:08.943] namespace "namespace-1565649315-30985" deleted
I0812 22:36:08.943] namespace "namespace-1565649317-19361" deleted
I0812 22:36:08.943] namespace "namespace-1565649350-29700" deleted
I0812 22:36:08.943] namespace "namespace-1565649351-8322" deleted
I0812 22:36:08.943] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0812 22:36:08.943] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0812 22:36:08.944] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0812 22:36:08.944] has:namespace "my-namespace" deleted
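Note: the Forbidden responses above are expected; default, kube-public and kube-system are protected by the apiserver's NamespaceLifecycle admission plugin, so a bulk namespace deletion removes everything else and reports those three as forbidden. A rough sketch of the kind of command being exercised:

  kubectl delete namespaces --all          # namespaces are cluster-scoped, hence the "deleting cluster-scoped resources" warning
  kubectl delete namespace kube-system     # Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted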
W0812 22:36:09.044] E0812 22:36:08.355146   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:09.045] E0812 22:36:08.456992   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:09.045] E0812 22:36:08.586033   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:09.045] E0812 22:36:08.691052   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:09.146] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0812 22:36:09.146] namespace/other created
I0812 22:36:09.253] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0812 22:36:09.369] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:09.541] pod/valid-pod created
W0812 22:36:09.642] E0812 22:36:09.356603   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:09.643] E0812 22:36:09.458448   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:09.643] E0812 22:36:09.588085   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:09.694] E0812 22:36:09.693507   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:09.794] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 22:36:09.795] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 22:36:09.862] Successful
I0812 22:36:09.863] message:error: a resource cannot be retrieved by name across all namespaces
I0812 22:36:09.863] has:a resource cannot be retrieved by name across all namespaces
I0812 22:36:09.964] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 22:36:10.054] pod "valid-pod" force deleted
W0812 22:36:10.155] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0812 22:36:10.255] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:10.256] namespace "other" deleted
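Note: the error above is kubectl refusing to combine a resource name with --all-namespaces; only namespaced name lookups or cross-namespace listings are allowed. A minimal sketch:

  kubectl get pods --namespace=other valid-pod     # fine: lookup by name within one namespace
  kubectl get pods --all-namespaces                # fine: list across namespaces
  kubectl get pods valid-pod --all-namespaces      # error: a resource cannot be retrieved by name across all namespaces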
W0812 22:36:10.359] E0812 22:36:10.358493   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:10.461] E0812 22:36:10.460435   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:10.591] E0812 22:36:10.590164   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:10.696] E0812 22:36:10.695397   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:11.361] E0812 22:36:11.360609   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:11.464] E0812 22:36:11.463133   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:11.544] I0812 22:36:11.543191   53188 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565649351-8322
W0812 22:36:11.553] I0812 22:36:11.552782   53188 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565649351-8322
W0812 22:36:11.593] E0812 22:36:11.592384   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:11.698] E0812 22:36:11.697363   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:12.363] E0812 22:36:12.362778   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:12.465] E0812 22:36:12.464845   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:12.595] E0812 22:36:12.594259   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:12.700] E0812 22:36:12.699312   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:13.365] E0812 22:36:13.364766   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:13.467] E0812 22:36:13.466433   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:13.596] E0812 22:36:13.596027   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:13.702] E0812 22:36:13.701370   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:14.366] E0812 22:36:14.365779   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:14.468] E0812 22:36:14.467882   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:14.598] E0812 22:36:14.597189   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:14.704] E0812 22:36:14.703999   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:15.367] E0812 22:36:15.366847   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:15.468] +++ exit code: 0
I0812 22:36:15.484] Recording: run_secrets_test
I0812 22:36:15.485] Running command: run_secrets_test
I0812 22:36:15.511] 
I0812 22:36:15.513] +++ Running case: test-cmd.run_secrets_test 
I0812 22:36:15.517] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0812 22:36:17.433] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0812 22:36:17.509] secret "test-secret" deleted
I0812 22:36:17.592] secret/test-secret created
I0812 22:36:17.684] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0812 22:36:17.770] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0812 22:36:17.845] secret "test-secret" deleted
W0812 22:36:17.947] E0812 22:36:15.469516   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.947] E0812 22:36:15.599126   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.948] E0812 22:36:15.705743   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.948] I0812 22:36:15.782166   70166 loader.go:375] Config loaded from file:  /tmp/tmp.H1TfYpmnae/.kube/config
W0812 22:36:17.948] E0812 22:36:16.368355   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.949] E0812 22:36:16.471109   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.949] E0812 22:36:16.600761   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.950] E0812 22:36:16.707221   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.950] E0812 22:36:17.369999   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.950] E0812 22:36:17.472522   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.951] E0812 22:36:17.602081   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:17.951] E0812 22:36:17.708915   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:18.052] secret/secret-string-data created
I0812 22:36:18.110] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0812 22:36:18.199] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0812 22:36:18.284] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0812 22:36:18.357] secret "secret-string-data" deleted
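Note: the map printed above, map[k1:djE= k2:djI=], is simply the stringData values base64-encoded into .data by the API server (djE= decodes to "v1", djI= to "v2"); .stringData itself is write-only and never persisted, which is why the template prints <no value>. A minimal sketch of the secret under test:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-string-data
    namespace: test-secrets
  stringData:        # write-only convenience field; merged into .data as base64
    k1: v1
    k2: v2
  EOF
  kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'   # map[k1:djE= k2:djI=]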
I0812 22:36:18.455] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:18.611] secret "test-secret" deleted
I0812 22:36:18.692] namespace "test-secrets" deleted
W0812 22:36:18.793] E0812 22:36:18.371475   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:18.793] E0812 22:36:18.474248   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:18.793] E0812 22:36:18.603830   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:18.794] E0812 22:36:18.710529   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:19.373] E0812 22:36:19.372872   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:19.476] E0812 22:36:19.475967   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:19.606] E0812 22:36:19.605731   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:19.712] E0812 22:36:19.712219   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:20.375] E0812 22:36:20.374366   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:20.478] E0812 22:36:20.477670   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:20.608] E0812 22:36:20.607407   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:20.714] E0812 22:36:20.713933   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:21.377] E0812 22:36:21.376930   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:21.479] E0812 22:36:21.479126   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:21.609] E0812 22:36:21.608932   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:21.716] E0812 22:36:21.715389   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:22.379] E0812 22:36:22.378301   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:22.481] E0812 22:36:22.480806   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:22.611] E0812 22:36:22.610782   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:22.717] E0812 22:36:22.716946   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:23.380] E0812 22:36:23.379909   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:23.482] E0812 22:36:23.482300   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:23.613] E0812 22:36:23.612838   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:23.719] E0812 22:36:23.718414   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:23.819] +++ exit code: 0
I0812 22:36:23.824] Recording: run_configmap_tests
I0812 22:36:23.825] Running command: run_configmap_tests
I0812 22:36:23.843] 
I0812 22:36:23.845] +++ Running case: test-cmd.run_configmap_tests 
I0812 22:36:23.847] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0812 22:36:24.953] configmap/test-binary-configmap created
I0812 22:36:25.038] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0812 22:36:25.125] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0812 22:36:25.357] configmap "test-configmap" deleted
I0812 22:36:25.436] configmap "test-binary-configmap" deleted
I0812 22:36:25.514] namespace "test-configmaps" deleted
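Note: a hedged sketch of the configmap commands exercised above (names from the log; the binary file path is a placeholder). kubectl keeps UTF-8 file content under .data, while non-UTF-8 content lands in .binaryData:

  kubectl create namespace test-configmaps
  kubectl create configmap test-configmap --from-literal=key1=value1 --namespace=test-configmaps
  kubectl create configmap test-binary-configmap --from-file=./some-binary-blob --namespace=test-configmaps
  kubectl delete configmap test-configmap test-binary-configmap --namespace=test-configmaps
  kubectl delete namespace test-configmaps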
W0812 22:36:25.615] E0812 22:36:24.380974   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:25.616] E0812 22:36:24.483755   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:25.616] E0812 22:36:24.614389   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:25.617] E0812 22:36:24.720073   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:25.617] E0812 22:36:25.382537   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:25.617] E0812 22:36:25.485135   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:25.618] E0812 22:36:25.616655   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:25.722] E0812 22:36:25.721679   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:26.384] E0812 22:36:26.384242   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:26.487] E0812 22:36:26.486910   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:26.619] E0812 22:36:26.618705   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:26.724] E0812 22:36:26.723657   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:27.386] E0812 22:36:27.386074   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:27.489] E0812 22:36:27.488708   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:27.621] E0812 22:36:27.620976   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:27.727] E0812 22:36:27.726143   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:28.389] E0812 22:36:28.388721   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:28.491] E0812 22:36:28.490537   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:28.624] E0812 22:36:28.623827   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:28.729] E0812 22:36:28.728483   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:29.391] E0812 22:36:29.390963   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:29.493] E0812 22:36:29.492255   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:29.627] E0812 22:36:29.626485   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:29.732] E0812 22:36:29.731327   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:30.394] E0812 22:36:30.393144   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:30.494] E0812 22:36:30.494248   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:30.628] E0812 22:36:30.627827   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:30.729] +++ exit code: 0
I0812 22:36:30.737] Recording: run_client_config_tests
I0812 22:36:30.738] Running command: run_client_config_tests
I0812 22:36:30.764] 
I0812 22:36:30.767] +++ Running case: test-cmd.run_client_config_tests 
I0812 22:36:30.770] +++ working dir: /go/src/k8s.io/kubernetes
I0812 22:36:30.773] +++ command: run_client_config_tests
I0812 22:36:30.788] +++ [0812 22:36:30] Creating namespace namespace-1565649390-3874
I0812 22:36:30.871] namespace/namespace-1565649390-3874 created
I0812 22:36:30.949] Context "test" modified.
I0812 22:36:30.956] +++ [0812 22:36:30] Testing client config
I0812 22:36:31.036] Successful
I0812 22:36:31.036] message:error: stat missing: no such file or directory
I0812 22:36:31.036] has:missing: no such file or directory
I0812 22:36:31.116] Successful
I0812 22:36:31.117] message:error: stat missing: no such file or directory
I0812 22:36:31.117] has:missing: no such file or directory
I0812 22:36:31.198] Successful
I0812 22:36:31.199] message:error: stat missing: no such file or directory
I0812 22:36:31.199] has:missing: no such file or directory
I0812 22:36:31.282] Successful
I0812 22:36:31.283] message:Error in configuration: context was not found for specified context: missing-context
I0812 22:36:31.283] has:context was not found for specified context: missing-context
I0812 22:36:31.366] Successful
I0812 22:36:31.367] message:error: no server found for cluster "missing-cluster"
I0812 22:36:31.367] has:no server found for cluster "missing-cluster"
I0812 22:36:31.450] Successful
I0812 22:36:31.450] message:error: auth info "missing-user" does not exist
I0812 22:36:31.450] has:auth info "missing-user" does not exist
W0812 22:36:31.551] E0812 22:36:30.733180   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:31.551] E0812 22:36:31.394984   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:31.552] E0812 22:36:31.495754   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:31.630] E0812 22:36:31.629349   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:31.731] Successful
I0812 22:36:31.731] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0812 22:36:31.731] has:error loading config file
I0812 22:36:31.731] Successful
I0812 22:36:31.731] message:error: stat missing-config: no such file or directory
I0812 22:36:31.731] has:no such file or directory
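Note: each of the client-config checks above exercises one kubeconfig error path; commands along these lines would reproduce the messages (the flag values are the placeholders the test uses):

  kubectl get pods --kubeconfig=missing            # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context       # context was not found for specified context: missing-context
  kubectl get pods --cluster=missing-cluster       # error: no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user             # error: auth info "missing-user" does not exist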
I0812 22:36:31.731] +++ exit code: 0
I0812 22:36:31.747] Recording: run_service_accounts_tests
I0812 22:36:31.747] Running command: run_service_accounts_tests
I0812 22:36:31.770] 
I0812 22:36:31.772] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 2 lines ...
I0812 22:36:31.794] +++ [0812 22:36:31] Creating namespace namespace-1565649391-17378
I0812 22:36:31.876] namespace/namespace-1565649391-17378 created
I0812 22:36:31.952] Context "test" modified.
I0812 22:36:31.960] +++ [0812 22:36:31] Testing service accounts
I0812 22:36:32.063] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I0812 22:36:32.145] namespace/test-service-accounts created
W0812 22:36:32.247] E0812 22:36:31.735002   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:32.347] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0812 22:36:32.348] serviceaccount/test-service-account created
I0812 22:36:32.442] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0812 22:36:32.529] serviceaccount "test-service-account" deleted
I0812 22:36:32.620] namespace "test-service-accounts" deleted
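Note: a minimal sketch of the service-account commands exercised above:

  kubectl create namespace test-service-accounts
  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete namespace test-service-accounts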
W0812 22:36:32.721] E0812 22:36:32.396697   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:32.722] E0812 22:36:32.497971   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:32.722] E0812 22:36:32.631241   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:32.738] E0812 22:36:32.737700   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:33.400] E0812 22:36:33.399402   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:33.502] E0812 22:36:33.500932   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:33.635] E0812 22:36:33.634014   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:33.741] E0812 22:36:33.740234   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:34.403] E0812 22:36:34.401852   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:34.504] E0812 22:36:34.503749   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:34.638] E0812 22:36:34.637001   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:34.744] E0812 22:36:34.742862   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:35.405] E0812 22:36:35.404155   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:35.507] E0812 22:36:35.505887   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:35.639] E0812 22:36:35.638989   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:35.746] E0812 22:36:35.745182   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:36.407] E0812 22:36:36.406216   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:36.508] E0812 22:36:36.507943   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:36.643] E0812 22:36:36.642276   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:36.748] E0812 22:36:36.747211   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:37.408] E0812 22:36:37.408024   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:37.510] E0812 22:36:37.509828   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:37.644] E0812 22:36:37.643521   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:37.749] E0812 22:36:37.748387   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:37.850] +++ exit code: 0
I0812 22:36:37.850] Recording: run_job_tests
I0812 22:36:37.850] Running command: run_job_tests
I0812 22:36:37.850] 
I0812 22:36:37.850] +++ Running case: test-cmd.run_job_tests 
I0812 22:36:37.850] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0812 22:36:38.698] Labels:                        run=pi
I0812 22:36:38.698] Annotations:                   <none>
I0812 22:36:38.699] Schedule:                      59 23 31 2 *
I0812 22:36:38.699] Concurrency Policy:            Allow
I0812 22:36:38.699] Suspend:                       False
I0812 22:36:38.699] Successful Job History Limit:  3
I0812 22:36:38.699] Failed Job History Limit:      1
I0812 22:36:38.699] Starting Deadline Seconds:     <unset>
I0812 22:36:38.699] Selector:                      <unset>
I0812 22:36:38.699] Parallelism:                   <unset>
I0812 22:36:38.699] Completions:                   <unset>
I0812 22:36:38.699] Pod Template:
I0812 22:36:38.699]   Labels:  run=pi
... skipping 22 lines ...
I0812 22:36:38.891] batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I0812 22:36:38.987] job.batch/test-job created
I0812 22:36:39.089] batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
I0812 22:36:39.171] NAME       COMPLETIONS   DURATION   AGE
I0812 22:36:39.172] test-job   0/1           1s         1s
W0812 22:36:39.273] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0812 22:36:39.274] E0812 22:36:38.409744   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:39.274] E0812 22:36:38.512451   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:39.274] E0812 22:36:38.645466   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:39.274] E0812 22:36:38.749961   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:39.275] I0812 22:36:38.988353   53188 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"9a72f812-ce99-4306-98f1-b08900f5a2fc", APIVersion:"batch/v1", ResourceVersion:"1352", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-2cxbq
I0812 22:36:39.375] Name:           test-job
I0812 22:36:39.375] Namespace:      test-jobs
I0812 22:36:39.376] Selector:       controller-uid=9a72f812-ce99-4306-98f1-b08900f5a2fc
I0812 22:36:39.376] Labels:         controller-uid=9a72f812-ce99-4306-98f1-b08900f5a2fc
I0812 22:36:39.376]                 job-name=test-job
I0812 22:36:39.376]                 run=pi
I0812 22:36:39.376] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0812 22:36:39.377] Controlled By:  CronJob/pi
I0812 22:36:39.377] Parallelism:    1
I0812 22:36:39.377] Completions:    1
I0812 22:36:39.377] Start Time:     Mon, 12 Aug 2019 22:36:38 +0000
I0812 22:36:39.377] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0812 22:36:39.377] Pod Template:
I0812 22:36:39.377]   Labels:  controller-uid=9a72f812-ce99-4306-98f1-b08900f5a2fc
I0812 22:36:39.377]            job-name=test-job
I0812 22:36:39.377]            run=pi
I0812 22:36:39.377]   Containers:
I0812 22:36:39.377]    pi:
... skipping 15 lines ...
I0812 22:36:39.379]   Type    Reason            Age   From            Message
I0812 22:36:39.379]   ----    ------            ----  ----            -------
I0812 22:36:39.379]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-2cxbq
I0812 22:36:39.379] job.batch "test-job" deleted
I0812 22:36:39.471] cronjob.batch "pi" deleted
I0812 22:36:39.566] namespace "test-jobs" deleted
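Note: the job described above was instantiated manually from a CronJob (see "Controlled By: CronJob/pi" and the cronjob.kubernetes.io/instantiate: manual annotation); the schedule 59 23 31 2 * refers to February 31st, so it never fires on its own. A hedged sketch of the commands involved (the image is a placeholder; the generator flag matches the deprecation warning logged above):

  kubectl run pi --generator=cronjob/v1beta1 --schedule="59 23 31 2 *" --image=perl --restart=OnFailure --namespace=test-jobs
  kubectl create job test-job --from=cronjob/pi --namespace=test-jobs
  kubectl describe job/test-job --namespace=test-jobs
  kubectl delete job/test-job cronjob/pi --namespace=test-jobs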
W0812 22:36:39.667] E0812 22:36:39.411861   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:39.668] E0812 22:36:39.514539   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:39.668] E0812 22:36:39.647957   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:39.752] E0812 22:36:39.751923   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:40.414] E0812 22:36:40.413958   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:40.517] E0812 22:36:40.516287   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:40.650] E0812 22:36:40.649937   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:40.754] E0812 22:36:40.753831   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:41.416] E0812 22:36:41.415814   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:41.519] E0812 22:36:41.518309   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:41.653] E0812 22:36:41.652104   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:41.756] E0812 22:36:41.756008   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:42.418] E0812 22:36:42.417810   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:42.521] E0812 22:36:42.520204   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:42.655] E0812 22:36:42.654504   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:42.759] E0812 22:36:42.758337   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:43.420] E0812 22:36:43.419795   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:43.523] E0812 22:36:43.522387   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:43.657] E0812 22:36:43.656499   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:43.761] E0812 22:36:43.760249   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:44.422] E0812 22:36:44.421381   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:44.525] E0812 22:36:44.524365   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:44.658] E0812 22:36:44.658012   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:44.759] +++ exit code: 0
I0812 22:36:44.772] Recording: run_create_job_tests
I0812 22:36:44.772] Running command: run_create_job_tests
I0812 22:36:44.796] 
I0812 22:36:44.799] +++ Running case: test-cmd.run_create_job_tests 
I0812 22:36:44.801] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 2 lines ...
I0812 22:36:44.904] namespace/namespace-1565649404-673 created
I0812 22:36:44.979] Context "test" modified.
I0812 22:36:45.070] job.batch/test-job created
I0812 22:36:45.174] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I0812 22:36:45.262] job.batch "test-job" deleted
I0812 22:36:45.352] job.batch/test-job-pi created
W0812 22:36:45.453] E0812 22:36:44.762144   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:45.454] I0812 22:36:45.069345   53188 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565649404-673", Name:"test-job", UID:"ba2f9528-c40e-4229-b70a-4304752ff418", APIVersion:"batch/v1", ResourceVersion:"1369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-qjhtk
W0812 22:36:45.454] I0812 22:36:45.348907   53188 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565649404-673", Name:"test-job-pi", UID:"90d8ae37-2894-4e8c-8431-4f7000e912c8", APIVersion:"batch/v1", ResourceVersion:"1377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-l9wqp
W0812 22:36:45.455] E0812 22:36:45.423094   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:45.526] E0812 22:36:45.526090   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:45.627] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0812 22:36:45.628] job.batch "test-job-pi" deleted
I0812 22:36:45.650] cronjob.batch/test-pi created
W0812 22:36:45.750] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0812 22:36:45.751] E0812 22:36:45.659519   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:45.755] I0812 22:36:45.754414   53188 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565649404-673", Name:"my-pi", UID:"56aa7e92-9740-4307-ac5d-cbd7f21ab87d", APIVersion:"batch/v1", ResourceVersion:"1385", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-692c2
W0812 22:36:45.764] E0812 22:36:45.763399   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:45.864] job.batch/my-pi created
I0812 22:36:45.865] Successful
I0812 22:36:45.865] message:[perl -Mbignum=bpi -wle print bpi(10)]
I0812 22:36:45.865] has:perl -Mbignum=bpi -wle print bpi(10)
I0812 22:36:45.946] job.batch "my-pi" deleted
I0812 22:36:46.032] cronjob.batch "test-pi" deleted
... skipping 10 lines ...
I0812 22:36:46.313] +++ [0812 22:36:46] Testing pod templates
I0812 22:36:46.405] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:46.581] podtemplate/nginx created
I0812 22:36:46.679] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0812 22:36:46.761] NAME    CONTAINERS   IMAGES   POD LABELS
I0812 22:36:46.762] nginx   nginx        nginx    name=nginx
W0812 22:36:46.863] E0812 22:36:46.424938   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:46.863] E0812 22:36:46.527424   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:46.863] I0812 22:36:46.577648   49714 controller.go:606] quota admission added evaluator for: podtemplates
W0812 22:36:46.863] E0812 22:36:46.661227   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:46.864] E0812 22:36:46.765199   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:46.964] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0812 22:36:47.031] podtemplate "nginx" deleted
I0812 22:36:47.132] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:47.145] +++ exit code: 0
I0812 22:36:47.186] Recording: run_service_tests
I0812 22:36:47.186] Running command: run_service_tests
... skipping 2 lines ...
I0812 22:36:47.216] +++ working dir: /go/src/k8s.io/kubernetes
I0812 22:36:47.219] +++ command: run_service_tests
I0812 22:36:47.295] Context "test" modified.
I0812 22:36:47.302] +++ [0812 22:36:47] Testing kubectl(v1:services)
I0812 22:36:47.399] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0812 22:36:47.567] service/redis-master created
W0812 22:36:47.668] E0812 22:36:47.426844   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:47.669] E0812 22:36:47.529197   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:47.669] E0812 22:36:47.663694   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:47.767] E0812 22:36:47.766825   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:47.868] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0812 22:36:47.869] core.sh:864: Successful describe services redis-master:
I0812 22:36:47.869] Name:              redis-master
I0812 22:36:47.869] Namespace:         default
I0812 22:36:47.869] Labels:            app=redis
I0812 22:36:47.869]                    role=master
... skipping 301 lines ...
I0812 22:36:49.337]   selector:
I0812 22:36:49.337]     role: padawan
I0812 22:36:49.337]   sessionAffinity: None
I0812 22:36:49.337]   type: ClusterIP
I0812 22:36:49.337] status:
I0812 22:36:49.337]   loadBalancer: {}
W0812 22:36:49.438] E0812 22:36:48.428552   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:49.438] E0812 22:36:48.530986   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:49.439] E0812 22:36:48.665405   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:49.439] E0812 22:36:48.768725   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:49.439] error: you must specify resources by --filename when --local is set.
W0812 22:36:49.439] Example resource specifications include:
W0812 22:36:49.439]    '-f rsrc.yaml'
W0812 22:36:49.439]    '--filename=rsrc.json'
W0812 22:36:49.439] E0812 22:36:49.430094   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:49.533] E0812 22:36:49.532961   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:49.634] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0812 22:36:49.702] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0812 22:36:49.803] service "redis-master" deleted
I0812 22:36:49.906] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0812 22:36:50.009] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0812 22:36:50.186] service/redis-master created
W0812 22:36:50.287] E0812 22:36:49.667115   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:50.288] E0812 22:36:49.770784   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:50.388] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0812 22:36:50.400] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0812 22:36:50.574] service/service-v1-test created
W0812 22:36:50.675] E0812 22:36:50.432141   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:50.676] E0812 22:36:50.535086   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:50.676] E0812 22:36:50.670290   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:50.772] E0812 22:36:50.772150   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:50.873] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0812 22:36:50.874] service/service-v1-test replaced
I0812 22:36:50.968] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0812 22:36:51.055] service "redis-master" deleted
I0812 22:36:51.150] service "service-v1-test" deleted
I0812 22:36:51.249] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0812 22:36:51.345] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0812 22:36:51.507] service/redis-master created
W0812 22:36:51.609] E0812 22:36:51.433856   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:51.609] E0812 22:36:51.537165   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:51.673] E0812 22:36:51.672097   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:51.774] service/redis-slave created
I0812 22:36:51.808] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0812 22:36:51.902] Successful
I0812 22:36:51.903] message:NAME           RSRC
I0812 22:36:51.903] kubernetes     144
I0812 22:36:51.903] redis-master   1420
... skipping 5 lines ...
I0812 22:36:52.192] core.sh:986: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0812 22:36:52.294] core.sh:990: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0812 22:36:52.373] service/beep-boop created
I0812 22:36:52.474] core.sh:994: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0812 22:36:52.570] core.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0812 22:36:52.663] service "beep-boop" deleted
W0812 22:36:52.764] E0812 22:36:51.774882   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:52.765] E0812 22:36:52.435238   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:52.765] E0812 22:36:52.539229   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:52.765] E0812 22:36:52.673714   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:52.777] E0812 22:36:52.776799   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:52.878] core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0812 22:36:52.878] core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:52.978] service/testmetadata created
I0812 22:36:52.979] deployment.apps/testmetadata created
I0812 22:36:53.086] core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I0812 22:36:53.186] core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
... skipping 16 lines ...
I0812 22:36:53.959] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:54.150] daemonset.apps/bind created
W0812 22:36:54.251] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0812 22:36:54.252] I0812 22:36:52.959953   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"af921690-77de-46a2-a1cc-727e23311767", APIVersion:"apps/v1", ResourceVersion:"1435", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0812 22:36:54.252] I0812 22:36:52.968640   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"09ceda48-1ea2-4dc5-9874-05bd795c91d9", APIVersion:"apps/v1", ResourceVersion:"1436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-glmp7
W0812 22:36:54.252] I0812 22:36:52.974834   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"09ceda48-1ea2-4dc5-9874-05bd795c91d9", APIVersion:"apps/v1", ResourceVersion:"1436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-d4qh5
W0812 22:36:54.252] E0812 22:36:53.436958   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:54.253] E0812 22:36:53.540746   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:54.253] E0812 22:36:53.675526   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:54.253] E0812 22:36:53.778750   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:54.253] I0812 22:36:54.146297   49714 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0812 22:36:54.253] I0812 22:36:54.158636   49714 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0812 22:36:54.354] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I0812 22:36:54.428] daemonset.apps/bind configured
W0812 22:36:54.529] E0812 22:36:54.439089   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:54.543] E0812 22:36:54.542422   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:54.644] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I0812 22:36:54.644] daemonset.apps/bind image updated
I0812 22:36:54.739] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I0812 22:36:54.839] daemonset.apps/bind env updated
I0812 22:36:54.941] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I0812 22:36:55.043] daemonset.apps/bind resource requirements updated
W0812 22:36:55.144] E0812 22:36:54.677813   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:55.145] E0812 22:36:54.780346   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:55.245] apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
I0812 22:36:55.265] daemonset.apps/bind restarted
I0812 22:36:55.373] apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
I0812 22:36:55.462] daemonset.apps "bind" deleted
I0812 22:36:55.491] +++ exit code: 0
I0812 22:36:55.534] Recording: run_daemonset_history_tests
... skipping 3 lines ...
I0812 22:36:55.567] +++ working dir: /go/src/k8s.io/kubernetes
I0812 22:36:55.570] +++ command: run_daemonset_history_tests
I0812 22:36:55.585] +++ [0812 22:36:55] Creating namespace namespace-1565649415-10759
I0812 22:36:55.670] namespace/namespace-1565649415-10759 created
I0812 22:36:55.749] Context "test" modified.
I0812 22:36:55.757] +++ [0812 22:36:55] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
W0812 22:36:55.858] E0812 22:36:55.441014   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:55.858] E0812 22:36:55.544090   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:55.859] E0812 22:36:55.679306   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:55.859] E0812 22:36:55.782150   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:55.960] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:56.043] daemonset.apps/bind created
I0812 22:36:56.164] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1565649415-10759"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0812 22:36:56.164]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I0812 22:36:56.277] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I0812 22:36:56.382] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0812 22:36:56.497] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0812 22:36:56.689] daemonset.apps/bind configured
W0812 22:36:56.791] E0812 22:36:56.442722   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:56.791] E0812 22:36:56.545991   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:56.792] E0812 22:36:56.680758   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:56.792] E0812 22:36:56.783673   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:56.892] apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0812 22:36:56.901] apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0812 22:36:57.004] apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0812 22:36:57.113] apps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1565649415-10759"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0812 22:36:57.114]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1565649415-10759"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0812 22:36:57.114]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
... skipping 12 lines ...
I0812 22:36:57.434] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0812 22:36:57.536] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0812 22:36:57.641] daemonset.apps/bind rolled back
I0812 22:36:57.743] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0812 22:36:57.839] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0812 22:36:57.947] Successful
I0812 22:36:57.948] message:error: unable to find specified revision 1000000 in history
I0812 22:36:57.948] has:unable to find specified revision
I0812 22:36:58.036] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0812 22:36:58.128] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0812 22:36:58.233] daemonset.apps/bind rolled back
I0812 22:36:58.332] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0812 22:36:58.427] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 9 lines ...
I0812 22:36:58.721] +++ [0812 22:36:58] Creating namespace namespace-1565649418-12051
I0812 22:36:58.803] namespace/namespace-1565649418-12051 created
I0812 22:36:58.875] Context "test" modified.
I0812 22:36:58.882] +++ [0812 22:36:58] Testing kubectl(v1:replicationcontrollers)
I0812 22:36:58.974] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:59.136] replicationcontroller/frontend created
W0812 22:36:59.237] E0812 22:36:57.444300   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.237] E0812 22:36:57.547733   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.243] E0812 22:36:57.664722   53188 daemon_controller.go:302] namespace-1565649415-10759/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1565649415-10759", SelfLink:"/apis/apps/v1/namespaces/namespace-1565649415-10759/daemonsets/bind", UID:"8f176597-3d2d-45e4-b673-dd418b14694c", ResourceVersion:"1504", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701246216, loc:(*time.Location)(0x71fa160)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1565649415-10759\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001829660), Fields:(*v1.Fields)(0xc001829680)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0018296a0), Fields:(*v1.Fields)(0xc0018296c0)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0018296e0), Fields:(*v1.Fields)(0xc001829700)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001829720), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0008af3d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010be0c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001829740), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a9ec80)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0008af42c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0812 22:36:59.243] E0812 22:36:57.682570   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.244] E0812 22:36:57.785520   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.244] E0812 22:36:58.446119   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.244] E0812 22:36:58.549291   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.244] E0812 22:36:58.683934   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.245] E0812 22:36:58.787140   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.245] I0812 22:36:59.144058   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"237d9944-d8fa-4cad-bad4-5ff33a84c156", APIVersion:"v1", ResourceVersion:"1515", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tlq8g
W0812 22:36:59.246] I0812 22:36:59.148838   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"237d9944-d8fa-4cad-bad4-5ff33a84c156", APIVersion:"v1", ResourceVersion:"1515", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m4trl
W0812 22:36:59.246] I0812 22:36:59.149774   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"237d9944-d8fa-4cad-bad4-5ff33a84c156", APIVersion:"v1", ResourceVersion:"1515", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7f5l7
I0812 22:36:59.347] replicationcontroller "frontend" deleted
I0812 22:36:59.360] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:59.458] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:36:59.626] replicationcontroller/frontend created
W0812 22:36:59.727] E0812 22:36:59.447309   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.728] E0812 22:36:59.550664   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.728] I0812 22:36:59.631330   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"8a555042-8ca8-492f-aab3-f676dcfea075", APIVersion:"v1", ResourceVersion:"1532", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kdstp
W0812 22:36:59.728] I0812 22:36:59.636124   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"8a555042-8ca8-492f-aab3-f676dcfea075", APIVersion:"v1", ResourceVersion:"1532", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7dldz
W0812 22:36:59.729] I0812 22:36:59.636587   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"8a555042-8ca8-492f-aab3-f676dcfea075", APIVersion:"v1", ResourceVersion:"1532", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9znsl
W0812 22:36:59.729] E0812 22:36:59.686256   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:36:59.789] E0812 22:36:59.788891   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:36:59.890] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0812 22:36:59.898] core.sh:1061: Successful describe rc frontend:
I0812 22:36:59.899] Name:         frontend
I0812 22:36:59.899] Namespace:    namespace-1565649418-12051
I0812 22:36:59.899] Selector:     app=guestbook,tier=frontend
I0812 22:36:59.899] Labels:       app=guestbook
I0812 22:36:59.899]               tier=frontend
I0812 22:36:59.899] Annotations:  <none>
I0812 22:36:59.900] Replicas:     3 current / 3 desired
I0812 22:36:59.900] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0812 22:36:59.900] Pod Template:
I0812 22:36:59.900]   Labels:  app=guestbook
I0812 22:36:59.900]            tier=frontend
I0812 22:36:59.900]   Containers:
I0812 22:36:59.900]    php-redis:
I0812 22:36:59.900]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0812 22:37:00.022] Namespace:    namespace-1565649418-12051
I0812 22:37:00.022] Selector:     app=guestbook,tier=frontend
I0812 22:37:00.022] Labels:       app=guestbook
I0812 22:37:00.022]               tier=frontend
I0812 22:37:00.022] Annotations:  <none>
I0812 22:37:00.022] Replicas:     3 current / 3 desired
I0812 22:37:00.022] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0812 22:37:00.022] Pod Template:
I0812 22:37:00.023]   Labels:  app=guestbook
I0812 22:37:00.023]            tier=frontend
I0812 22:37:00.023]   Containers:
I0812 22:37:00.023]    php-redis:
I0812 22:37:00.023]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0812 22:37:00.135] Namespace:    namespace-1565649418-12051
I0812 22:37:00.135] Selector:     app=guestbook,tier=frontend
I0812 22:37:00.135] Labels:       app=guestbook
I0812 22:37:00.135]               tier=frontend
I0812 22:37:00.135] Annotations:  <none>
I0812 22:37:00.135] Replicas:     3 current / 3 desired
I0812 22:37:00.135] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0812 22:37:00.135] Pod Template:
I0812 22:37:00.135]   Labels:  app=guestbook
I0812 22:37:00.135]            tier=frontend
I0812 22:37:00.135]   Containers:
I0812 22:37:00.136]    php-redis:
I0812 22:37:00.136]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0812 22:37:00.258] Namespace:    namespace-1565649418-12051
I0812 22:37:00.258] Selector:     app=guestbook,tier=frontend
I0812 22:37:00.258] Labels:       app=guestbook
I0812 22:37:00.258]               tier=frontend
I0812 22:37:00.258] Annotations:  <none>
I0812 22:37:00.259] Replicas:     3 current / 3 desired
I0812 22:37:00.259] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0812 22:37:00.259] Pod Template:
I0812 22:37:00.259]   Labels:  app=guestbook
I0812 22:37:00.259]            tier=frontend
I0812 22:37:00.259]   Containers:
I0812 22:37:00.259]    php-redis:
I0812 22:37:00.259]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0812 22:37:00.417] Namespace:    namespace-1565649418-12051
I0812 22:37:00.417] Selector:     app=guestbook,tier=frontend
I0812 22:37:00.417] Labels:       app=guestbook
I0812 22:37:00.417]               tier=frontend
I0812 22:37:00.417] Annotations:  <none>
I0812 22:37:00.417] Replicas:     3 current / 3 desired
I0812 22:37:00.418] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0812 22:37:00.418] Pod Template:
I0812 22:37:00.418]   Labels:  app=guestbook
I0812 22:37:00.418]            tier=frontend
I0812 22:37:00.418]   Containers:
I0812 22:37:00.418]    php-redis:
I0812 22:37:00.418]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0812 22:37:00.536] Namespace:    namespace-1565649418-12051
I0812 22:37:00.536] Selector:     app=guestbook,tier=frontend
I0812 22:37:00.537] Labels:       app=guestbook
I0812 22:37:00.537]               tier=frontend
I0812 22:37:00.537] Annotations:  <none>
I0812 22:37:00.537] Replicas:     3 current / 3 desired
I0812 22:37:00.537] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0812 22:37:00.537] Pod Template:
I0812 22:37:00.537]   Labels:  app=guestbook
I0812 22:37:00.537]            tier=frontend
I0812 22:37:00.538]   Containers:
I0812 22:37:00.538]    php-redis:
I0812 22:37:00.538]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0812 22:37:00.652] Namespace:    namespace-1565649418-12051
I0812 22:37:00.652] Selector:     app=guestbook,tier=frontend
I0812 22:37:00.652] Labels:       app=guestbook
I0812 22:37:00.653]               tier=frontend
I0812 22:37:00.653] Annotations:  <none>
I0812 22:37:00.653] Replicas:     3 current / 3 desired
I0812 22:37:00.653] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0812 22:37:00.653] Pod Template:
I0812 22:37:00.653]   Labels:  app=guestbook
I0812 22:37:00.653]            tier=frontend
I0812 22:37:00.653]   Containers:
I0812 22:37:00.654]    php-redis:
I0812 22:37:00.654]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0812 22:37:00.777] Namespace:    namespace-1565649418-12051
I0812 22:37:00.778] Selector:     app=guestbook,tier=frontend
I0812 22:37:00.778] Labels:       app=guestbook
I0812 22:37:00.778]               tier=frontend
I0812 22:37:00.778] Annotations:  <none>
I0812 22:37:00.778] Replicas:     3 current / 3 desired
I0812 22:37:00.778] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0812 22:37:00.778] Pod Template:
I0812 22:37:00.778]   Labels:  app=guestbook
I0812 22:37:00.778]            tier=frontend
I0812 22:37:00.778]   Containers:
I0812 22:37:00.778]    php-redis:
I0812 22:37:00.778]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0812 22:37:00.779]   ----    ------            ----  ----                    -------
I0812 22:37:00.780]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-kdstp
I0812 22:37:00.780]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-7dldz
I0812 22:37:00.780]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-9znsl
I0812 22:37:00.874] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 3
I0812 22:37:00.972] replicationcontroller/frontend scaled
W0812 22:37:01.073] E0812 22:37:00.448631   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:01.073] E0812 22:37:00.552070   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:01.073] E0812 22:37:00.688381   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:01.074] E0812 22:37:00.790564   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:01.074] I0812 22:37:00.979822   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"8a555042-8ca8-492f-aab3-f676dcfea075", APIVersion:"v1", ResourceVersion:"1541", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-9znsl
I0812 22:37:01.174] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I0812 22:37:01.176] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 2
I0812 22:37:01.400] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I0812 22:37:01.501] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0812 22:37:01.605] replicationcontroller/frontend scaled
W0812 22:37:01.706] error: Expected replicas to be 3, was 2
W0812 22:37:01.706] E0812 22:37:01.450659   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:01.706] E0812 22:37:01.554291   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:01.707] I0812 22:37:01.610358   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"8a555042-8ca8-492f-aab3-f676dcfea075", APIVersion:"v1", ResourceVersion:"1548", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pxfmj
W0812 22:37:01.707] E0812 22:37:01.690775   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:01.793] E0812 22:37:01.792233   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:01.893] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0812 22:37:01.894] (Bcore.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0812 22:37:01.912] replicationcontroller/frontend scaled
W0812 22:37:02.013] I0812 22:37:01.919666   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"8a555042-8ca8-492f-aab3-f676dcfea075", APIVersion:"v1", ResourceVersion:"1553", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-pxfmj
I0812 22:37:02.114] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0812 22:37:02.128] replicationcontroller "frontend" deleted
I0812 22:37:02.312] replicationcontroller/redis-master created
W0812 22:37:02.414] I0812 22:37:02.318091   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-master", UID:"6c800e80-506b-4357-bd71-11a367a3d21e", APIVersion:"v1", ResourceVersion:"1564", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-2sb49
W0812 22:37:02.453] E0812 22:37:02.452314   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:02.499] I0812 22:37:02.497975   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-slave", UID:"49ea86bd-499d-4c25-b882-462e2da9bb2c", APIVersion:"v1", ResourceVersion:"1569", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-6hddz
W0812 22:37:02.503] I0812 22:37:02.502860   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-slave", UID:"49ea86bd-499d-4c25-b882-462e2da9bb2c", APIVersion:"v1", ResourceVersion:"1569", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-r498k
W0812 22:37:02.557] E0812 22:37:02.556293   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:02.607] I0812 22:37:02.606318   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-master", UID:"6c800e80-506b-4357-bd71-11a367a3d21e", APIVersion:"v1", ResourceVersion:"1576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-jqjvc
W0812 22:37:02.611] I0812 22:37:02.610395   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-master", UID:"6c800e80-506b-4357-bd71-11a367a3d21e", APIVersion:"v1", ResourceVersion:"1576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-hsb89
W0812 22:37:02.613] I0812 22:37:02.612762   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-master", UID:"6c800e80-506b-4357-bd71-11a367a3d21e", APIVersion:"v1", ResourceVersion:"1576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-mrv95
W0812 22:37:02.616] I0812 22:37:02.615758   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-slave", UID:"49ea86bd-499d-4c25-b882-462e2da9bb2c", APIVersion:"v1", ResourceVersion:"1580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-jpk6t
W0812 22:37:02.621] I0812 22:37:02.620975   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-slave", UID:"49ea86bd-499d-4c25-b882-462e2da9bb2c", APIVersion:"v1", ResourceVersion:"1580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-6vfwg
W0812 22:37:02.693] E0812 22:37:02.692599   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:02.794] E0812 22:37:02.793567   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:02.895] replicationcontroller/redis-slave created
I0812 22:37:02.896] replicationcontroller/redis-master scaled
I0812 22:37:02.896] replicationcontroller/redis-slave scaled
I0812 22:37:02.896] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
I0812 22:37:02.896] core.sh:1118: Successful get rc redis-slave {{.spec.replicas}}: 4
I0812 22:37:02.896] replicationcontroller "redis-master" deleted
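Note: the two "scaled" lines and the core.sh:1117/1118 assertions above reflect a single scale operation applied to both replication controllers; the exact invocation in core.sh is not shown in this excerpt. A rough sketch of the kind of command that produces this output:

  # scale two replication controllers to 4 replicas in one call
  kubectl scale rc/redis-master rc/redis-slave --replicas=4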
... skipping 6 lines ...
W0812 22:37:03.224] I0812 22:37:03.223594   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment", UID:"a117fa87-3413-41a9-96e6-16460059e553", APIVersion:"apps/v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-66987bfc58 to 1
W0812 22:37:03.233] I0812 22:37:03.232763   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-66987bfc58", UID:"5c985862-d2a9-4680-ab13-1127beb9945d", APIVersion:"apps/v1", ResourceVersion:"1625", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-66987bfc58-dbmh9
W0812 22:37:03.234] I0812 22:37:03.233169   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-66987bfc58", UID:"5c985862-d2a9-4680-ab13-1127beb9945d", APIVersion:"apps/v1", ResourceVersion:"1625", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-66987bfc58-zj4jd
I0812 22:37:03.335] deployment.apps/nginx-deployment scaled
I0812 22:37:03.335] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I0812 22:37:03.412] deployment.apps "nginx-deployment" deleted
W0812 22:37:03.512] E0812 22:37:03.454127   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:03.559] E0812 22:37:03.558180   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:03.659] Successful
I0812 22:37:03.660] message:service/expose-test-deployment exposed
I0812 22:37:03.660] has:service/expose-test-deployment exposed
I0812 22:37:03.660] service "expose-test-deployment" deleted
I0812 22:37:03.717] Successful
I0812 22:37:03.717] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0812 22:37:03.717] See 'kubectl expose -h' for help and examples
I0812 22:37:03.717] has:invalid deployment: no selectors
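Note: the expose failure above is intentional. kubectl expose derives the new Service's selector from the target object, so a deployment created without a usable selector cannot be exposed. A minimal sketch of the form that succeeds, assuming a deployment that does carry a selector (names illustrative):

  # the Service inherits the deployment's selector; --port is the Service port
  kubectl expose deployment nginx-deployment --port=80 --target-port=8000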
W0812 22:37:03.818] E0812 22:37:03.694424   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:03.819] E0812 22:37:03.795481   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:03.895] I0812 22:37:03.894508   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment", UID:"825ca5ac-6fd0-480d-ac4f-f32f5e45a604", APIVersion:"apps/v1", ResourceVersion:"1649", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-66987bfc58 to 3
W0812 22:37:03.901] I0812 22:37:03.900783   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-66987bfc58", UID:"7ce9a29d-7288-452c-906a-74f7a1d00319", APIVersion:"apps/v1", ResourceVersion:"1650", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-6b2sz
W0812 22:37:03.908] I0812 22:37:03.907263   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-66987bfc58", UID:"7ce9a29d-7288-452c-906a-74f7a1d00319", APIVersion:"apps/v1", ResourceVersion:"1650", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-mzkdx
W0812 22:37:03.912] I0812 22:37:03.912178   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-66987bfc58", UID:"7ce9a29d-7288-452c-906a-74f7a1d00319", APIVersion:"apps/v1", ResourceVersion:"1650", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-66987bfc58-pc5hf
I0812 22:37:04.013] deployment.apps/nginx-deployment created
I0812 22:37:04.014] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0812 22:37:04.098] service/nginx-deployment exposed
I0812 22:37:04.200] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0812 22:37:04.290] deployment.apps "nginx-deployment" deleted
I0812 22:37:04.301] service "nginx-deployment" deleted
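Note: the core.sh assertions in this log read individual fields with go-templates; the same query can be run by hand while the object still exists. A sketch against the Service exposed above:

  # print the first Service port (the core.sh:1150 check expects 80)
  kubectl get service nginx-deployment -o go-template='{{(index .spec.ports 0).port}}'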
W0812 22:37:04.456] E0812 22:37:04.455985   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:04.501] I0812 22:37:04.500315   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"50dc41d8-1d7c-4969-997e-7ac3d345515c", APIVersion:"v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sbshn
W0812 22:37:04.511] I0812 22:37:04.510278   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"50dc41d8-1d7c-4969-997e-7ac3d345515c", APIVersion:"v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fj6cm
W0812 22:37:04.515] I0812 22:37:04.514812   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"50dc41d8-1d7c-4969-997e-7ac3d345515c", APIVersion:"v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fhnwj
W0812 22:37:04.560] E0812 22:37:04.559958   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:04.661] replicationcontroller/frontend created
I0812 22:37:04.662] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I0812 22:37:04.709] service/frontend exposed
I0812 22:37:04.806] core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0812 22:37:04.900] service/frontend-2 exposed
I0812 22:37:05.005] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
I0812 22:37:05.185] pod/valid-pod created
W0812 22:37:05.286] E0812 22:37:04.696291   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:05.287] E0812 22:37:04.796894   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:05.387] service/frontend-3 exposed
I0812 22:37:05.409] core.sh:1170: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
I0812 22:37:05.518] service/frontend-4 exposed
W0812 22:37:05.619] E0812 22:37:05.457602   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:05.619] E0812 22:37:05.562347   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:05.699] E0812 22:37:05.698935   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:05.799] E0812 22:37:05.799092   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:05.900] core.sh:1174: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
I0812 22:37:05.901] service/frontend-5 exposed
I0812 22:37:05.901] core.sh:1178: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
I0812 22:37:05.952] pod "valid-pod" deleted
I0812 22:37:06.059] service "frontend" deleted
I0812 22:37:06.071] service "frontend-2" deleted
I0812 22:37:06.081] service "frontend-3" deleted
I0812 22:37:06.095] service "frontend-4" deleted
I0812 22:37:06.106] service "frontend-5" deleted
I0812 22:37:06.232] Successful
I0812 22:37:06.232] message:error: cannot expose a Node
I0812 22:37:06.232] has:cannot expose
I0812 22:37:06.350] Successful
I0812 22:37:06.351] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0812 22:37:06.351] has:metadata.name: Invalid value
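Note: both failures above are deliberate negative cases: nodes cannot be exposed, and Service names are DNS labels limited to 63 characters. The very next expose succeeds because its name sits exactly at the limit. When a generated name would be too long, an explicit short name avoids the validation error (controller name below is hypothetical):

  # override the generated Service name instead of inheriting a too-long controller name
  kubectl expose rc very-long-controller-name --port=80 --name=frontend-svc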
W0812 22:37:06.460] E0812 22:37:06.459538   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:06.561] Successful
I0812 22:37:06.561] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0812 22:37:06.561] has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0812 22:37:06.581] service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
W0812 22:37:06.682] E0812 22:37:06.564212   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:06.701] E0812 22:37:06.700799   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:06.801] E0812 22:37:06.800518   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:06.902] Successful
I0812 22:37:06.902] message:service/etcd-server exposed
I0812 22:37:06.903] has:etcd-server exposed
I0812 22:37:06.905] core.sh:1208: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
I0812 22:37:06.930] core.sh:1209: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
I0812 22:37:07.029] service "etcd-server" deleted
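Note: the etcd-server assertions above (port-1 2380, port-2 2379) check the names assigned when the exposed workload declares more than one port; the generated Service appears to get sequentially numbered port names. A sketch for inspecting them by hand:

  # list the generated port names and numbers on the multi-port Service
  kubectl get service etcd-server -o go-template='{{range .spec.ports}}{{.name}} {{.port}} {{end}}'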
I0812 22:37:07.142] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0812 22:37:07.240] replicationcontroller "frontend" deleted
I0812 22:37:07.375] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:37:07.484] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:37:07.657] replicationcontroller/frontend created
W0812 22:37:07.759] E0812 22:37:07.461445   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:07.759] E0812 22:37:07.566035   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:07.760] I0812 22:37:07.663479   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"7fffdb92-53c5-4ab5-9b91-aebd6199c154", APIVersion:"v1", ResourceVersion:"1741", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bpzzr
W0812 22:37:07.760] I0812 22:37:07.667748   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"7fffdb92-53c5-4ab5-9b91-aebd6199c154", APIVersion:"v1", ResourceVersion:"1741", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-c49hp
W0812 22:37:07.761] I0812 22:37:07.670067   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"7fffdb92-53c5-4ab5-9b91-aebd6199c154", APIVersion:"v1", ResourceVersion:"1741", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xw4cw
W0812 22:37:07.761] E0812 22:37:07.702929   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:07.803] E0812 22:37:07.802806   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:07.848] I0812 22:37:07.847539   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-slave", UID:"e8e8c207-3ad1-484e-9f87-3d7cd1c2175a", APIVersion:"v1", ResourceVersion:"1750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-spww6
W0812 22:37:07.853] I0812 22:37:07.852667   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"redis-slave", UID:"e8e8c207-3ad1-484e-9f87-3d7cd1c2175a", APIVersion:"v1", ResourceVersion:"1750", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-xsd4l
I0812 22:37:07.954] replicationcontroller/redis-slave created
I0812 22:37:07.967] core.sh:1228: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I0812 22:37:08.081] core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I0812 22:37:08.175] replicationcontroller "frontend" deleted
I0812 22:37:08.180] replicationcontroller "redis-slave" deleted
I0812 22:37:08.309] core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:37:08.413] core.sh:1240: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:37:08.605] replicationcontroller/frontend created
W0812 22:37:08.706] E0812 22:37:08.463692   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:08.707] E0812 22:37:08.567933   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:08.708] I0812 22:37:08.609907   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"16238cc0-53c1-4d0c-b6c4-06b177659437", APIVersion:"v1", ResourceVersion:"1769", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-56gpp
W0812 22:37:08.708] I0812 22:37:08.613646   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"16238cc0-53c1-4d0c-b6c4-06b177659437", APIVersion:"v1", ResourceVersion:"1769", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cxkzl
W0812 22:37:08.709] I0812 22:37:08.615011   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565649418-12051", Name:"frontend", UID:"16238cc0-53c1-4d0c-b6c4-06b177659437", APIVersion:"v1", ResourceVersion:"1769", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xcj45
W0812 22:37:08.709] E0812 22:37:08.706288   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:08.805] E0812 22:37:08.804358   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:08.906] core.sh:1243: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0812 22:37:08.906] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0812 22:37:08.931] core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0812 22:37:09.028] horizontalpodautoscaler.autoscaling "frontend" deleted
I0812 22:37:09.132] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0812 22:37:09.246] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0812 22:37:09.346] horizontalpodautoscaler.autoscaling "frontend" deleted
W0812 22:37:09.447] Error: required flag(s) "max" not set
W0812 22:37:09.448] 
W0812 22:37:09.448] 
W0812 22:37:09.448] Examples:
W0812 22:37:09.448]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0812 22:37:09.448]   kubectl autoscale deployment foo --min=2 --max=10
W0812 22:37:09.448]   
... skipping 18 lines ...
W0812 22:37:09.453] 
W0812 22:37:09.453] Usage:
W0812 22:37:09.453]   kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [options]
W0812 22:37:09.453] 
W0812 22:37:09.453] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0812 22:37:09.454] 
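Note: the usage text above is the expected output of an autoscale call that omitted the required --max flag; the two HPAs created earlier in this block set both bounds. A sketch matching the 2/3/80 values asserted at core.sh:1250:

  # autoscale the frontend RC between 2 and 3 replicas, targeting 80% CPU
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80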
W0812 22:37:09.466] E0812 22:37:09.465519   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:09.567] replicationcontroller "frontend" deleted
I0812 22:37:09.648] core.sh:1259: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:37:09.736] apiVersion: apps/v1
I0812 22:37:09.736] kind: Deployment
I0812 22:37:09.736] metadata:
I0812 22:37:09.736]   creationTimestamp: null
... skipping 24 lines ...
I0812 22:37:09.739]           limits:
I0812 22:37:09.739]             cpu: 300m
I0812 22:37:09.739]           requests:
I0812 22:37:09.739]             cpu: 300m
I0812 22:37:09.739]       terminationGracePeriodSeconds: 0
I0812 22:37:09.739] status: {}
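Note: the manifest above (empty creationTimestamp, status: {}) is client-side generator output that was never submitted, which is why the next GET for nginx-deployment-resources returns NotFound. The exact flags core.sh used are not shown in this excerpt; a rough sketch of producing similar local-only YAML with a 2019-era kubectl (deployment name hypothetical):

  # render a Deployment manifest locally without creating it
  kubectl create deployment my-nginx --image=k8s.gcr.io/nginx:test-cmd --dry-run -o yaml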
W0812 22:37:09.840] E0812 22:37:09.569966   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:09.840] E0812 22:37:09.708084   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:09.840] E0812 22:37:09.808082   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:09.840] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0812 22:37:10.027] deployment.apps/nginx-deployment-resources created
W0812 22:37:10.128] I0812 22:37:10.032196   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources", UID:"d74697f7-6b2d-4824-8919-634e013fe0c9", APIVersion:"apps/v1", ResourceVersion:"1790", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6dbb5769d7 to 3
W0812 22:37:10.128] I0812 22:37:10.038660   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources-6dbb5769d7", UID:"f4e89500-90c4-4b58-9333-31a5dfadd59e", APIVersion:"apps/v1", ResourceVersion:"1791", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6dbb5769d7-bqhq4
W0812 22:37:10.129] I0812 22:37:10.043532   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources-6dbb5769d7", UID:"f4e89500-90c4-4b58-9333-31a5dfadd59e", APIVersion:"apps/v1", ResourceVersion:"1791", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6dbb5769d7-595cv
W0812 22:37:10.130] I0812 22:37:10.043657   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources-6dbb5769d7", UID:"f4e89500-90c4-4b58-9333-31a5dfadd59e", APIVersion:"apps/v1", ResourceVersion:"1791", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6dbb5769d7-pkclh
I0812 22:37:10.231] core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0812 22:37:10.260] core.sh:1266: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0812 22:37:10.381] core.sh:1267: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0812 22:37:10.487] deployment.apps/nginx-deployment-resources resource requirements updated
W0812 22:37:10.588] E0812 22:37:10.467183   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:10.589] I0812 22:37:10.494312   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources", UID:"d74697f7-6b2d-4824-8919-634e013fe0c9", APIVersion:"apps/v1", ResourceVersion:"1804", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-58d7fb85cf to 1
W0812 22:37:10.589] I0812 22:37:10.499949   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources-58d7fb85cf", UID:"c2f45230-d305-445d-b922-87d9bf78f317", APIVersion:"apps/v1", ResourceVersion:"1805", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-58d7fb85cf-7j59d
W0812 22:37:10.590] E0812 22:37:10.572279   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:10.690] core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0812 22:37:10.716] core.sh:1271: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0812 22:37:10.928] deployment.apps/nginx-deployment-resources resource requirements updated
W0812 22:37:11.029] E0812 22:37:10.710284   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:11.030] E0812 22:37:10.809971   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:11.030] error: unable to find container named redis
W0812 22:37:11.031] I0812 22:37:10.949192   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources", UID:"d74697f7-6b2d-4824-8919-634e013fe0c9", APIVersion:"apps/v1", ResourceVersion:"1814", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-58d7fb85cf to 0
W0812 22:37:11.031] I0812 22:37:10.961217   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources-58d7fb85cf", UID:"c2f45230-d305-445d-b922-87d9bf78f317", APIVersion:"apps/v1", ResourceVersion:"1818", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-58d7fb85cf-7j59d
W0812 22:37:11.032] I0812 22:37:10.963866   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources", UID:"d74697f7-6b2d-4824-8919-634e013fe0c9", APIVersion:"apps/v1", ResourceVersion:"1817", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5cd64dc74f to 1
W0812 22:37:11.032] I0812 22:37:10.972865   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources-5cd64dc74f", UID:"df7a6c08-6896-433f-9dad-f1ba17a67a0e", APIVersion:"apps/v1", ResourceVersion:"1822", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5cd64dc74f-lbxzk
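Note: "unable to find container named redis" above is the negative case for kubectl set resources: -c/--containers must name a container that exists in the pod template (the image assertions at core.sh:1266/1267 show an nginx and a perl container). A sketch of both outcomes, assuming the first container is named nginx (the container names are not shown in this excerpt):

  # fails: this deployment has no container called "redis"
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m
  # succeeds: target an existing container by name
  kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m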
I0812 22:37:11.133] core.sh:1276: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0812 22:37:11.180] core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0812 22:37:11.292] deployment.apps/nginx-deployment-resources resource requirements updated
W0812 22:37:11.393] I0812 22:37:11.316918   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources", UID:"d74697f7-6b2d-4824-8919-634e013fe0c9", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5cd64dc74f to 0
W0812 22:37:11.394] I0812 22:37:11.325179   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources-5cd64dc74f", UID:"df7a6c08-6896-433f-9dad-f1ba17a67a0e", APIVersion:"apps/v1", ResourceVersion:"1839", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5cd64dc74f-lbxzk
W0812 22:37:11.394] I0812 22:37:11.340052   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources", UID:"d74697f7-6b2d-4824-8919-634e013fe0c9", APIVersion:"apps/v1", ResourceVersion:"1838", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-8586dd678 to 1
W0812 22:37:11.395] I0812 22:37:11.344237   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649418-12051", Name:"nginx-deployment-resources-8586dd678", UID:"38555890-69f9-4fcd-9c5c-a12d5fde57c8", APIVersion:"apps/v1", ResourceVersion:"1845", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8586dd678-4l2fg
W0812 22:37:11.469] E0812 22:37:11.468750   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:11.570] core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0812 22:37:11.570] core.sh:1281: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0812 22:37:11.677] core.sh:1282: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
I0812 22:37:11.788] apiVersion: apps/v1
I0812 22:37:11.789] kind: Deployment
I0812 22:37:11.789] metadata:
... skipping 189 lines ...
I0812 22:37:11.805]     status: "True"
I0812 22:37:11.805]     type: Progressing
I0812 22:37:11.805]   observedGeneration: 4
I0812 22:37:11.805]   replicas: 4
I0812 22:37:11.805]   unavailableReplicas: 4
I0812 22:37:11.806]   updatedReplicas: 1
W0812 22:37:11.906] E0812 22:37:11.574087   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:11.907] E0812 22:37:11.712724   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:11.907] E0812 22:37:11.811508   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:11.907] error: you must specify resources by --filename when --local is set.
W0812 22:37:11.907] Example resource specifications include:
W0812 22:37:11.907]    '-f rsrc.yaml'
W0812 22:37:11.908]    '--filename=rsrc.json'
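Note: the --local error above is another intentional negative case: --local makes kubectl operate purely on a manifest passed via -f/--filename without contacting the server, so omitting the file is rejected. A sketch of the valid combination (file name hypothetical):

  # preview the resource change locally; nothing is sent to the cluster
  kubectl set resources -f deployment.yaml --limits=cpu=200m --local -o yaml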
I0812 22:37:12.008] core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0812 22:37:12.100] core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0812 22:37:12.213] core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 7 lines ...
I0812 22:37:12.431] +++ command: run_deployment_tests
I0812 22:37:12.448] +++ [0812 22:37:12] Creating namespace namespace-1565649432-25061
I0812 22:37:12.538] namespace/namespace-1565649432-25061 created
I0812 22:37:12.626] Context "test" modified.
I0812 22:37:12.634] +++ [0812 22:37:12] Testing deployments
I0812 22:37:12.726] deployment.apps/test-nginx-extensions created
W0812 22:37:12.826] E0812 22:37:12.470786   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:12.827] E0812 22:37:12.576513   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:12.827] E0812 22:37:12.714433   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:12.828] I0812 22:37:12.732517   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649432-25061", Name:"test-nginx-extensions", UID:"8c991120-d7b3-45e3-b0bf-e18766521e8a", APIVersion:"apps/v1", ResourceVersion:"1872", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-574b6dd4f9 to 1
W0812 22:37:12.828] I0812 22:37:12.740494   53188 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565649432-25061", Name:"test-nginx-extensions-574b6dd4f9", UID:"4bf25236-eeb1-4997-95c0-bf27194ffadc", APIVersion:"apps/v1", ResourceVersion:"1873", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-574b6dd4f9-8rgtv
W0812 22:37:12.828] E0812 22:37:12.813052   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:12.929] apps.sh:185: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
I0812 22:37:12.931] Successful
I0812 22:37:12.932] message:10
I0812 22:37:12.932] has not:2
I0812 22:37:13.032] Successful
I0812 22:37:13.033] message:apps/v1
... skipping 6 lines ...
I0812 22:37:13.445] Successful
I0812 22:37:13.446] message:10
I0812 22:37:13.446] has:10
I0812 22:37:13.545] Successful
I0812 22:37:13.546] message:apps/v1
I0812 22:37:13.546] has:apps/v1
W0812 22:37:13.647] E0812 22:37:13.472752   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:13.647] E0812 22:37:13.578473   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:13.716] E0812 22:37:13.716106   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 22:37:13.815] E0812 22:37:13.814732   53188 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 22:37:13.916] Successful describe rs:
I0812 22:37:13.917] Name:           test-nginx-apps-7fb7df9785
I0812 22:37:13.917] Namespace:      namespace-1565649432-25061
I0812 22:37:13.917] Selector:       app=test-nginx-apps,pod-template-hash=7fb7df9785
I0812 22:37:13.917] Labels:         app=test-nginx-apps
I0812 22:37:13.917]                 pod-template-hash=7fb7df9785
I0812 22:37:13.917] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0812 22:37:13.918]                 deployment.kubernetes.io/max-replicas: 2
I0812 22:37:13.918]                 deployment.kubernetes.io/revision: 1
I0812 22:37:13.918] Controlled By:  Deployment/test-nginx-apps
I0812 22:37:13.918] Replicas:       1 current / 1 desired
I0812 22:37:13.918] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0812 22:37:13.918] Pod Template:
I0812 22:37:13.918]   Labels:  app=test-nginx-apps
I0812 22:37:13.918]            pod-template-hash=7fb7df9785
I0812 22:37:13.919]   Containers:
I0812 22:37:13.919]    nginx:
I0812 22:37:13.919]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 34 lines ...
I0812 22:37:14.122] apps.sh:214: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 22:37:14.217] deployment.apps/nginx-with-command created
W0812 22:37:14.318] I0812 22:37:14.222189   53188 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565649432-25061", Name:"nginx-with-command", UID:"1728