PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-10-17 13:30
Elapsed: 28m22s
Revision/Refs: master:534051ac, 82703:67a31641
Builder: gke-prow-ssd-pool-1a225945-m4r6
pod: 079ada5e-f0e2-11e9-99c1-720b36104e5f
infra-commit: 1ed1b45c6
repo: k8s.io/kubernetes
repo-commit: 9e91fa03e4ec7d07909a1ab5d60b1cf3b3f34346
repos: {u'k8s.io/kubernetes': u'master:534051acec00ab0dcaea502cc3bf410ba32c7b27,82703:67a31641875c114c87533d4a097c185c912dbe30'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.30s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
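
To run that command against the same code this job tested, the merged state can be recreated from the refs listed above. A minimal sketch, assuming a local clone of k8s.io/kubernetes with GitHub's pull refs reachable from the origin remote; the integration suite also needs a local etcd, which the log below connects to on 127.0.0.1:2379:

# Check out the base commit the job merged onto (full SHA from the repos entry above).
git checkout 534051acec00ab0dcaea502cc3bf410ba32c7b27
# Fetch and merge the PR head under test (GitHub exposes it as refs/pull/82703/head).
git fetch origin pull/82703/head
git merge 67a31641875c114c87533d4a097c185c912dbe30
# Integration tests expect etcd on localhost; hack/install-etcd.sh in the repo provides one.
go test -v k8s.io/kubernetes/test/integration/scheduler -run 'TestSchedulerCreationFromConfigMap$'

Note that repo-commit above is the merge commit Prow itself created, so an exact SHA match is not expected locally; merging the two refs reproduces the same tree.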
=== RUN   TestSchedulerCreationFromConfigMap
W1017 13:55:15.858281  108805 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1017 13:55:15.858297  108805 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1017 13:55:15.858310  108805 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1017 13:55:15.858320  108805 master.go:261] Using reconciler: 
I1017 13:55:15.860307  108805 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.860449  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.861227  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.862755  108805 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1017 13:55:15.862796  108805 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.862836  108805 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1017 13:55:15.863017  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.863032  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.864024  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.864374  108805 store.go:1342] Monitoring events count at <storage-prefix>//events
I1017 13:55:15.864424  108805 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.864627  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.864655  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.864722  108805 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1017 13:55:15.865486  108805 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1017 13:55:15.865535  108805 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.865627  108805 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1017 13:55:15.865681  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.865699  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.865865  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.866362  108805 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1017 13:55:15.866538  108805 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.866674  108805 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1017 13:55:15.866755  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.866784  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.867114  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.868336  108805 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1017 13:55:15.868452  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.868512  108805 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.868707  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.868736  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.868746  108805 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1017 13:55:15.869440  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.869771  108805 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1017 13:55:15.869912  108805 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.870012  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.870031  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.870054  108805 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1017 13:55:15.871054  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.871182  108805 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1017 13:55:15.871291  108805 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1017 13:55:15.871319  108805 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.871416  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.871429  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.872159  108805 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1017 13:55:15.872305  108805 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.872532  108805 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1017 13:55:15.872817  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.872839  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.872425  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.873611  108805 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1017 13:55:15.873790  108805 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.873838  108805 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1017 13:55:15.873939  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.873962  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.874793  108805 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1017 13:55:15.874894  108805 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1017 13:55:15.874939  108805 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.875045  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.875064  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.875308  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.876575  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.877056  108805 watch_cache.go:451] Replace watchCache (rev: 42907) 
I1017 13:55:15.878386  108805 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1017 13:55:15.878600  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.878681  108805 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1017 13:55:15.878763  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.878789  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.879791  108805 watch_cache.go:451] Replace watchCache (rev: 42908) 
I1017 13:55:15.881302  108805 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1017 13:55:15.881496  108805 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.881678  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.881701  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.881792  108805 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1017 13:55:15.886925  108805 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1017 13:55:15.888860  108805 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.889113  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.889149  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.891963  108805 watch_cache.go:451] Replace watchCache (rev: 42909) 
I1017 13:55:15.892034  108805 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1017 13:55:15.893253  108805 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1017 13:55:15.893310  108805 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1017 13:55:15.893311  108805 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.893511  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.893532  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.893663  108805 watch_cache.go:451] Replace watchCache (rev: 42912) 
I1017 13:55:15.893938  108805 watch_cache.go:451] Replace watchCache (rev: 42912) 
I1017 13:55:15.894224  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.894243  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.894794  108805 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.894933  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.894949  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.895590  108805 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1017 13:55:15.895614  108805 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1017 13:55:15.895853  108805 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1017 13:55:15.896020  108805 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.896201  108805 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.896826  108805 watch_cache.go:451] Replace watchCache (rev: 42913) 
I1017 13:55:15.896880  108805 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.897490  108805 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.898183  108805 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.899051  108805 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.903119  108805 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.903284  108805 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.903517  108805 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.904006  108805 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.904614  108805 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.904883  108805 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.905773  108805 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.906052  108805 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.906720  108805 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.907135  108805 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.908279  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.908571  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.908854  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.909069  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.909436  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.909757  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.909971  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.910592  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.910834  108805 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.911564  108805 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.912191  108805 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.912390  108805 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.912542  108805 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.913138  108805 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.913356  108805 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.913810  108805 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.914331  108805 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.915013  108805 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.915873  108805 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.916216  108805 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.916313  108805 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1017 13:55:15.916328  108805 master.go:464] Enabling API group "authentication.k8s.io".
I1017 13:55:15.916341  108805 master.go:464] Enabling API group "authorization.k8s.io".
I1017 13:55:15.916545  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.916732  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.916753  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.918081  108805 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 13:55:15.918131  108805 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 13:55:15.918404  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.918576  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.918599  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.920040  108805 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 13:55:15.920109  108805 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 13:55:15.920374  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.920514  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.920545  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.921214  108805 watch_cache.go:451] Replace watchCache (rev: 42920) 
I1017 13:55:15.921688  108805 watch_cache.go:451] Replace watchCache (rev: 42921) 
I1017 13:55:15.922075  108805 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 13:55:15.922100  108805 master.go:464] Enabling API group "autoscaling".
I1017 13:55:15.922184  108805 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 13:55:15.922306  108805 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.922481  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.922506  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.923285  108805 watch_cache.go:451] Replace watchCache (rev: 42921) 
I1017 13:55:15.924207  108805 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1017 13:55:15.924348  108805 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.924651  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.924658  108805 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1017 13:55:15.924684  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.926373  108805 watch_cache.go:451] Replace watchCache (rev: 42921) 
I1017 13:55:15.927284  108805 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1017 13:55:15.927311  108805 master.go:464] Enabling API group "batch".
I1017 13:55:15.927375  108805 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1017 13:55:15.927505  108805 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.927735  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.927758  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.928751  108805 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1017 13:55:15.928784  108805 watch_cache.go:451] Replace watchCache (rev: 42922) 
I1017 13:55:15.928816  108805 master.go:464] Enabling API group "certificates.k8s.io".
I1017 13:55:15.928997  108805 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.929029  108805 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1017 13:55:15.929186  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.929209  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.930463  108805 watch_cache.go:451] Replace watchCache (rev: 42922) 
I1017 13:55:15.930762  108805 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1017 13:55:15.930835  108805 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1017 13:55:15.930946  108805 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.931078  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.931100  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.931512  108805 watch_cache.go:451] Replace watchCache (rev: 42922) 
I1017 13:55:15.931690  108805 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1017 13:55:15.931713  108805 master.go:464] Enabling API group "coordination.k8s.io".
I1017 13:55:15.931727  108805 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1017 13:55:15.931810  108805 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1017 13:55:15.931916  108805 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.932037  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.932079  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.935445  108805 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1017 13:55:15.935475  108805 master.go:464] Enabling API group "extensions".
I1017 13:55:15.935645  108805 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1017 13:55:15.935805  108805 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.935983  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.936013  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.936960  108805 watch_cache.go:451] Replace watchCache (rev: 42922) 
I1017 13:55:15.937205  108805 watch_cache.go:451] Replace watchCache (rev: 42922) 
I1017 13:55:15.937341  108805 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1017 13:55:15.937388  108805 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1017 13:55:15.937526  108805 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.937718  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.937738  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.938459  108805 watch_cache.go:451] Replace watchCache (rev: 42923) 
I1017 13:55:15.939179  108805 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1017 13:55:15.939206  108805 master.go:464] Enabling API group "networking.k8s.io".
I1017 13:55:15.939210  108805 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1017 13:55:15.939261  108805 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.939499  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.939524  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.940044  108805 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1017 13:55:15.940061  108805 master.go:464] Enabling API group "node.k8s.io".
I1017 13:55:15.940244  108805 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.940406  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.940410  108805 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1017 13:55:15.940426  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.941338  108805 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1017 13:55:15.941524  108805 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.941781  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.941799  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.941883  108805 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1017 13:55:15.941884  108805 watch_cache.go:451] Replace watchCache (rev: 42923) 
I1017 13:55:15.941901  108805 watch_cache.go:451] Replace watchCache (rev: 42923) 
I1017 13:55:15.943440  108805 watch_cache.go:451] Replace watchCache (rev: 42923) 
I1017 13:55:15.943720  108805 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1017 13:55:15.943760  108805 master.go:464] Enabling API group "policy".
I1017 13:55:15.943830  108805 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.943896  108805 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1017 13:55:15.944088  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.944120  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.944970  108805 watch_cache.go:451] Replace watchCache (rev: 42923) 
I1017 13:55:15.945178  108805 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1017 13:55:15.945333  108805 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.945450  108805 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1017 13:55:15.945474  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.945490  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.946693  108805 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1017 13:55:15.946751  108805 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.946772  108805 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1017 13:55:15.946869  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.946882  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.947801  108805 watch_cache.go:451] Replace watchCache (rev: 42924) 
I1017 13:55:15.948806  108805 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1017 13:55:15.948937  108805 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.948954  108805 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1017 13:55:15.949054  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.949069  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.949775  108805 watch_cache.go:451] Replace watchCache (rev: 42924) 
I1017 13:55:15.950390  108805 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1017 13:55:15.950478  108805 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.950596  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.950632  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.950707  108805 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1017 13:55:15.951286  108805 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1017 13:55:15.951374  108805 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1017 13:55:15.951433  108805 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.951542  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.951645  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.952328  108805 watch_cache.go:451] Replace watchCache (rev: 42925) 
I1017 13:55:15.952664  108805 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1017 13:55:15.952723  108805 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.952843  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.952868  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.952937  108805 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1017 13:55:15.953959  108805 watch_cache.go:451] Replace watchCache (rev: 42925) 
I1017 13:55:15.954989  108805 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1017 13:55:15.955095  108805 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1017 13:55:15.955123  108805 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.955238  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.955256  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.956181  108805 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1017 13:55:15.956238  108805 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1017 13:55:15.956513  108805 watch_cache.go:451] Replace watchCache (rev: 42925) 
I1017 13:55:15.956785  108805 watch_cache.go:451] Replace watchCache (rev: 42925) 
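Each Monitoring / Listing and watching / Replace watchCache run above is one resource's cacher starting up: a reflector lists the type from etcd, fills the watch cache, and stamps it with the listed resourceVersion (the rev: numbers). A rough client-go sketch of the same reflector pattern follows, shown against the API rather than etcd; runReflector and clientset are illustrative names, not from this test.

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    // runReflector lists and watches pods into a local store, the same
    // list-then-watch loop behind the "Listing and watching *rbac.Role
    // from storage/cacher.go:/roles" lines above.
    func runReflector(clientset *kubernetes.Clientset, stop <-chan struct{}) {
        lw := cache.NewListWatchFromClient(
            clientset.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
        store := cache.NewStore(cache.MetaNamespaceKeyFunc)
        r := cache.NewReflector(lw, &v1.Pod{}, store, 0 /* no resync */)
        r.Run(stop) // blocks until stop is closed
    }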
I1017 13:55:15.957914  108805 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.958073  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.958092  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.958154  108805 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1017 13:55:15.961027  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.961287  108805 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1017 13:55:15.961421  108805 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.961592  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.961612  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.961673  108805 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1017 13:55:15.962583  108805 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1017 13:55:15.962600  108805 master.go:464] Enabling API group "scheduling.k8s.io".
I1017 13:55:15.962639  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.962662  108805 master.go:453] Skipping disabled API group "settings.k8s.io".
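The repeating client.go parsed scheme: "endpoint" and endpoint.go ccResolverWrapper pairs are gRPC resolver chatter from the etcd v3 client: every storage handle dials etcd through the client's registered "endpoint" scheme, so the pair recurs once per resource. A hedged sketch of the dial that produces them, assuming the go.etcd.io/etcd/clientv3 package of this era; dialEtcd is an illustrative name.

    import (
        "time"

        "go.etcd.io/etcd/clientv3"
    )

    // dialEtcd opens a client against the same endpoint the test uses.
    // clientv3 registers an "endpoint" gRPC resolver scheme, which is what
    // emits the parsed scheme / ccResolverWrapper lines in this log.
    func dialEtcd() (*clientv3.Client, error) {
        return clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
    }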
I1017 13:55:15.962786  108805 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.962804  108805 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1017 13:55:15.962871  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.962883  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.964350  108805 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1017 13:55:15.964401  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.964421  108805 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1017 13:55:15.964523  108805 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.964954  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.964973  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.965122  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.965702  108805 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1017 13:55:15.965763  108805 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.965795  108805 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1017 13:55:15.965891  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.965913  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.966688  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.967287  108805 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1017 13:55:15.967339  108805 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.967455  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.967474  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.967496  108805 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1017 13:55:15.967657  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.968899  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.970215  108805 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1017 13:55:15.970382  108805 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1017 13:55:15.970401  108805 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.970514  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.970536  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.971052  108805 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1017 13:55:15.971108  108805 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1017 13:55:15.971218  108805 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.971381  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.971402  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.972291  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.972449  108805 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1017 13:55:15.972471  108805 master.go:464] Enabling API group "storage.k8s.io".
I1017 13:55:15.972539  108805 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1017 13:55:15.972666  108805 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.972807  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.972827  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.972885  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.973897  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.974439  108805 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1017 13:55:15.974517  108805 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1017 13:55:15.974582  108805 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.974722  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.974740  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.975146  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.975691  108805 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1017 13:55:15.975814  108805 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.975843  108805 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1017 13:55:15.975902  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.975913  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.980084  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.980295  108805 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1017 13:55:15.980360  108805 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1017 13:55:15.980464  108805 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.980606  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.980629  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.981365  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.994854  108805 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1017 13:55:15.995026  108805 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1017 13:55:15.995098  108805 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.995295  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.995319  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.997011  108805 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1017 13:55:15.997086  108805 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1017 13:55:15.997091  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.997175  108805 master.go:464] Enabling API group "apps".
I1017 13:55:15.997262  108805 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.997521  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.997583  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.998243  108805 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1017 13:55:15.998301  108805 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:15.998314  108805 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1017 13:55:15.998396  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:15.998407  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:15.998433  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:15.999337  108805 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1017 13:55:15.999379  108805 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1017 13:55:15.999749  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:16.000893  108805 watch_cache.go:451] Replace watchCache (rev: 42926) 
I1017 13:55:16.002003  108805 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.002182  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:16.002196  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:16.005277  108805 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1017 13:55:16.005331  108805 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1017 13:55:16.005541  108805 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.005894  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:16.005911  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:16.007041  108805 watch_cache.go:451] Replace watchCache (rev: 42927) 
I1017 13:55:16.008246  108805 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1017 13:55:16.008266  108805 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1017 13:55:16.008343  108805 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.008347  108805 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1017 13:55:16.008670  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:16.008691  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:16.009369  108805 store.go:1342] Monitoring events count at <storage-prefix>//events
I1017 13:55:16.009382  108805 master.go:464] Enabling API group "events.k8s.io".
I1017 13:55:16.009477  108805 watch_cache.go:451] Replace watchCache (rev: 42928) 
I1017 13:55:16.009666  108805 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.009834  108805 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1017 13:55:16.009878  108805 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.010147  108805 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.010282  108805 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.010388  108805 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.010579  108805 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.010807  108805 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.010884  108805 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.010952  108805 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.011045  108805 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.011170  108805 watch_cache.go:451] Replace watchCache (rev: 42928) 
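Unlike the resources before them, the tokenreviews and *accessreviews/*rulesreviews entries above get storage codecs but no Monitoring or Listing and watching lines: they are virtual, request-scoped resources that are never persisted, so no cacher is built for them. For illustration, a SubjectAccessReview check with the pre-1.18 client-go Create signature of this era; canJaneGetPods and clientset are illustrative names.

    import (
        authorizationv1 "k8s.io/api/authorization/v1"
        "k8s.io/client-go/kubernetes"
    )

    // canJaneGetPods asks the apiserver whether user "jane" may get pods in
    // "default". The review is evaluated in-request and never stored, which
    // is why no watch cache appears for these resources in the log.
    func canJaneGetPods(clientset *kubernetes.Clientset) (bool, error) {
        sar := &authorizationv1.SubjectAccessReview{
            Spec: authorizationv1.SubjectAccessReviewSpec{
                User: "jane",
                ResourceAttributes: &authorizationv1.ResourceAttributes{
                    Verb: "get", Resource: "pods", Namespace: "default",
                },
            },
        }
        resp, err := clientset.AuthorizationV1().SubjectAccessReviews().Create(sar)
        if err != nil {
            return false, err
        }
        return resp.Status.Allowed, nil
    }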
I1017 13:55:16.011964  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.012180  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.012924  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.013207  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.013808  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.014028  108805 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.014773  108805 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.014941  108805 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.015492  108805 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.015695  108805 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:55:16.015733  108805 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
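This warning (and the v1alpha1 ones below) is the generic API server skipping any group/version whose versioned storage map is empty, which is how alpha versions disabled by default surface here; they are typically switched on with the kube-apiserver --runtime-config flag (e.g. batch/v2alpha1=true). A runnable toy model of the skip follows; the storageMap literal is invented for illustration, and only the log message format matches the real check in genericapiserver.go.

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/runtime/schema"
    )

    func main() {
        // Invented stand-in for the server's versioned storage map:
        // a version with no storage entries is skipped at install time.
        storageMap := map[string]map[string]struct{}{
            "v1":       {"jobs": {}},
            "v2alpha1": {}, // disabled by default, so no resources registered
        }
        for _, gv := range []schema.GroupVersion{
            {Group: "batch", Version: "v1"},
            {Group: "batch", Version: "v2alpha1"},
        } {
            if len(storageMap[gv.Version]) == 0 {
                fmt.Printf("Skipping API %v because it has no resources.\n", gv)
                continue
            }
            fmt.Printf("installing %v\n", gv)
        }
    }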
I1017 13:55:16.016191  108805 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.016286  108805 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.016438  108805 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.017142  108805 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.017676  108805 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.018449  108805 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.018694  108805 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.019682  108805 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.020230  108805 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.020624  108805 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.021215  108805 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:55:16.021273  108805 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1017 13:55:16.021922  108805 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.022229  108805 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.022721  108805 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.023218  108805 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.023612  108805 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.024190  108805 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.024987  108805 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.025515  108805 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.025962  108805 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.026698  108805 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.027345  108805 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:55:16.027407  108805 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1017 13:55:16.028071  108805 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.028648  108805 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:55:16.028712  108805 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1017 13:55:16.029284  108805 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.029784  108805 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.029942  108805 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.030505  108805 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.030900  108805 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.031378  108805 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.032029  108805 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:55:16.032088  108805 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1017 13:55:16.033008  108805 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.033478  108805 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.033700  108805 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.034398  108805 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.034805  108805 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.035150  108805 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.035938  108805 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.036267  108805 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.036538  108805 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.037314  108805 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.037604  108805 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.037879  108805 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:55:16.037944  108805 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1017 13:55:16.037952  108805 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1017 13:55:16.038641  108805 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.039325  108805 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.040140  108805 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.040873  108805 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:55:16.041718  108805 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"a865eb3a-b5bf-43b3-a448-3f977e930731", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
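The storage_factory lines above show every group/resource being wired to the same etcd3-backed storagebackend.Config. A minimal sketch of building such a config against the etcd endpoint seen in the log (http://127.0.0.1:2379); note that k8s.io/apiserver is an internal library whose fields shift between releases, so this reflects the struct as printed in this log, not a stable public API.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Mirrors the fields printed by storage_factory.go above: one etcd
	// server, paging enabled, 5m compaction, 60s count-metric polling.
	cfg := storagebackend.Config{
		Type:   storagebackend.StorageTypeETCD3, // the log's Type:"" is the unset default, which resolves to etcd3
		Prefix: "a865eb3a-b5bf-43b3-a448-3f977e930731", // per-test prefix taken from the log
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    300 * time.Second, // CompactionInterval:300000000000 (ns) in the log
		CountMetricPollPeriod: 60 * time.Second,  // CountMetricPollPeriod:60000000000 (ns) in the log
	}
	fmt.Printf("%+v\n", cfg)
}
```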
I1017 13:55:16.045810  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.045830  108805 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1017 13:55:16.045838  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.045845  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.045851  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.045857  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.045888  108805 httplog.go:90] GET /healthz: (181.889µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
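Each GET /healthz above returns the per-check [+]/[-] breakdown because the aggregate check is still failing; the apiserver emits the same listing when asked with ?verbose. A minimal stdlib sketch that polls the endpoint until it reports 200, assuming a placeholder address, since the integration test server listens on an ephemeral 127.0.0.1 port.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz fetches /healthz?verbose until it returns 200 OK or the
// attempts run out, printing the per-check body each time.
func pollHealthz(base string, attempts int) error {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(base + "/healthz?verbose")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d\n%s\n", i+1, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // the log shows roughly 100ms probe spacing
	}
	return fmt.Errorf("healthz never became ready after %d attempts", attempts)
}

func main() {
	// Placeholder address; the test apiserver picks its own port.
	if err := pollHealthz("http://127.0.0.1:8080", 50); err != nil {
		fmt.Println(err)
	}
}
```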
I1017 13:55:16.047093  108805 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.383451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38986]
I1017 13:55:16.050931  108805 httplog.go:90] GET /api/v1/services: (1.174953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38986]
I1017 13:55:16.055528  108805 httplog.go:90] GET /api/v1/services: (1.090927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38986]
I1017 13:55:16.058116  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.058141  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.058151  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.058158  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.058166  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.058191  108805 httplog.go:90] GET /healthz: (144.714µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.060022  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.95744ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38986]
I1017 13:55:16.061156  108805 httplog.go:90] GET /api/v1/services: (860.528µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38986]
I1017 13:55:16.061653  108805 httplog.go:90] GET /api/v1/services: (2.502041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.061927  108805 httplog.go:90] POST /api/v1/namespaces: (1.50546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I1017 13:55:16.063321  108805 httplog.go:90] GET /api/v1/namespaces/kube-public: (822.698µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.068874  108805 httplog.go:90] POST /api/v1/namespaces: (1.604304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.070188  108805 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (891.995µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.071757  108805 httplog.go:90] POST /api/v1/namespaces: (1.222185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.147297  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.147323  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.147332  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.147339  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.147345  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.147384  108805 httplog.go:90] GET /healthz: (222.994µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.158927  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.158958  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.158974  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.158984  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.158993  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.159023  108805 httplog.go:90] GET /healthz: (262.882µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.246920  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.246959  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.246973  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.246983  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.246991  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.247047  108805 httplog.go:90] GET /healthz: (278.558µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.259074  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.259128  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.259143  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.259154  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.259164  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.259203  108805 httplog.go:90] GET /healthz: (303.987µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.347114  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.347163  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.347176  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.347186  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.347195  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.347235  108805 httplog.go:90] GET /healthz: (273.997µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.358896  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.358937  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.358950  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.358960  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.358968  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.358996  108805 httplog.go:90] GET /healthz: (242.095µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.446913  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.446945  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.446960  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.446969  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.446978  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.447034  108805 httplog.go:90] GET /healthz: (285.267µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.458938  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.458970  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.458984  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.458995  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.459004  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.459035  108805 httplog.go:90] GET /healthz: (268.663µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.547092  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.547122  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.547135  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.547144  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.547153  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.547201  108805 httplog.go:90] GET /healthz: (285.351µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.558901  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.558932  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.558940  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.558955  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.558961  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.558992  108805 httplog.go:90] GET /healthz: (234.01µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.647072  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.647106  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.647139  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.647149  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.647157  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.647214  108805 httplog.go:90] GET /healthz: (351.93µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.659024  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.659077  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.659090  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.659099  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.659107  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.659176  108805 httplog.go:90] GET /healthz: (347.721µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.747040  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.747071  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.747085  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.747096  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.747104  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.747136  108805 httplog.go:90] GET /healthz: (255.979µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.758925  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.758953  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.758966  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.758975  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.758984  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.759036  108805 httplog.go:90] GET /healthz: (245.619µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.846974  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.847005  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.847014  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.847021  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.847027  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.847076  108805 httplog.go:90] GET /healthz: (260.727µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.859153  108805 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:55:16.859181  108805 client.go:357] parsed scheme: "endpoint"
I1017 13:55:16.859196  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.859208  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.859218  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.859235  108805 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.859259  108805 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:55:16.859269  108805 httplog.go:90] GET /healthz: (368.2µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:16.948104  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.948130  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.948140  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.948149  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.948200  108805 httplog.go:90] GET /healthz: (1.315306ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:16.960188  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:16.960214  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:16.960225  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:16.960242  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:16.960293  108805 httplog.go:90] GET /healthz: (1.403853ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.047151  108805 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.846852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.047217  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.811371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38986]
I1017 13:55:17.048003  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.048022  108805 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:55:17.048032  108805 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:55:17.048040  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:55:17.048069  108805 httplog.go:90] GET /healthz: (1.243183ms) 0 [Go-http-client/1.1 127.0.0.1:46892]
I1017 13:55:17.049339  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.800779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.049434  108805 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.74197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38986]
I1017 13:55:17.049721  108805 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1017 13:55:17.050849  108805 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (955.727µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46892]
I1017 13:55:17.050868  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (969.562µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.051975  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (806.633µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46892]
I1017 13:55:17.052698  108805 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.403463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.052968  108805 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1017 13:55:17.052986  108805 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
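The two objects just created are the built-in system priority classes: system-node-critical at value 2000001000 and system-cluster-critical at 2000000000. A hedged client-go sketch of creating an ordinary scheduling.k8s.io/v1 PriorityClass below those reserved values; the kubeconfig path and class name are illustrative, and the Create signature shown is the ctx-taking form used by recent client-go releases.

```go
package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the integration test instead talks to
	// its in-process apiserver directly.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "example-critical"}, // hypothetical name
		Value:       1000000, // user classes must stay below the system values above
		Description: "example class well below system-cluster-critical (2000000000)",
	}
	created, err := client.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PriorityClass", created.Name, "with value", created.Value)
}
```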
I1017 13:55:17.053074  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.68744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.053374  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.053019ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46892]
I1017 13:55:17.055149  108805 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.64116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.055233  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.269783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46892]
I1017 13:55:17.056433  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (764.079µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.057818  108805 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.203078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.058100  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (768.033µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.059303  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (899.743µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.059917  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.059947  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.059976  108805 httplog.go:90] GET /healthz: (1.28774ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.061874  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.763301ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.063818  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.629191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.064000  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
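The alternating GET-404 / POST-201 pairs from here on are the RBAC bootstrap reconciling each default ClusterRole: look the role up, and create it only when it is absent. A minimal client-go sketch of that get-or-create pattern, assuming an already-built clientset and, as above, the ctx-taking method signatures of recent client-go.

```go
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRole mirrors the GET-404-then-POST-201 pattern in the log:
// fetch the ClusterRole, and create it only if the lookup says NotFound.
func ensureClusterRole(ctx context.Context, client kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := client.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present; nothing to create
	}
	if !apierrors.IsNotFound(err) {
		return err // a real lookup failure, not just "missing"
	}
	_, err = client.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
	return err
}
```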
I1017 13:55:17.065216  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.01159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.067383  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.703071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.067750  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1017 13:55:17.068725  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (783.209µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.071058  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.954911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.071593  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1017 13:55:17.072495  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (665.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.074194  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.285092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.074363  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1017 13:55:17.077340  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.563531ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.080275  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.4777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.080524  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1017 13:55:17.082444  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (679.637µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.084363  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.5133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.084692  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1017 13:55:17.085513  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (684.296µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.086902  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.121057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.087116  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1017 13:55:17.088327  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (959.002µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.090069  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.312815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.090326  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1017 13:55:17.091524  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (830.069µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.095790  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.907068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.096264  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1017 13:55:17.097435  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.037756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.099975  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.229872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.100298  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1017 13:55:17.101789  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.290719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.104012  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.772698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.104274  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1017 13:55:17.106423  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.907769ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.108924  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.079014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.109291  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1017 13:55:17.111454  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.958676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.113588  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.730932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.113802  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1017 13:55:17.115322  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.078843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.117172  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.396907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.117387  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1017 13:55:17.118408  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (771.108µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.120057  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.224098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.120268  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1017 13:55:17.121483  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (969.387µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.123279  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.323307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.123475  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1017 13:55:17.124463  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (714.443µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.126425  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.289357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.126648  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1017 13:55:17.127733  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (868.95µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.129820  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.739267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.130038  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1017 13:55:17.131036  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (785.664µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.133273  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.786874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.133451  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1017 13:55:17.134488  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (761.257µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.136796  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.725161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.136985  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1017 13:55:17.138027  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (854.098µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.140140  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.140408  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1017 13:55:17.141439  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (703.265µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.143375  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.493763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.143713  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1017 13:55:17.144783  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (799.083µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.146210  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.156057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.146697  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1017 13:55:17.147453  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.147488  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.147518  108805 httplog.go:90] GET /healthz: (863.224µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:17.147839  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (942.792µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.150278  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.041754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.150453  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1017 13:55:17.151718  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (860.546µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.154181  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.945428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.154493  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1017 13:55:17.158361  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (3.52742ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.159969  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.160007  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.160045  108805 httplog.go:90] GET /healthz: (1.248705ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.162257  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.069739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.162737  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1017 13:55:17.164062  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.092976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.167080  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.387005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.167486  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1017 13:55:17.168732  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (980.799µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.171451  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.276913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.171959  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1017 13:55:17.175098  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (2.880715ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.178896  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.926328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.179155  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1017 13:55:17.180755  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.364791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.184051  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.797837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.184366  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1017 13:55:17.185846  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.172285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.191231  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.813322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.191666  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1017 13:55:17.193499  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.266382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.196439  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.479522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.196973  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1017 13:55:17.199364  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (2.099662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.204447  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.344363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.205364  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1017 13:55:17.212004  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (6.187611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.214737  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.957772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.214990  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1017 13:55:17.216583  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.335502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.219522  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.973042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.221612  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1017 13:55:17.222910  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.124079ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.224952  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.366146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.225166  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1017 13:55:17.226372  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (848.843µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.228733  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.813265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.228946  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1017 13:55:17.230994  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.877934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.237427  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.02163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.237779  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1017 13:55:17.239318  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.265404ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.242813  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.995047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.243191  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1017 13:55:17.246943  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (3.295239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.250249  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.250287  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.250338  108805 httplog.go:90] GET /healthz: (2.49331ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:17.251999  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.83596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.252943  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1017 13:55:17.256356  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (3.152626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.259740  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.553073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.260109  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1017 13:55:17.260818  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.260855  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.260893  108805 httplog.go:90] GET /healthz: (2.129015ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.262329  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.901295ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.265326  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.388798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.265546  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1017 13:55:17.266943  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.08606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.269488  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.105203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.269869  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1017 13:55:17.272000  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.901039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.275635  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.089369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.275929  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1017 13:55:17.278265  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (2.165014ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.281021  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.356746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.281236  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1017 13:55:17.282542  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.146398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.294839  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (11.903126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.295167  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1017 13:55:17.304468  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (8.996432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.308025  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.769206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.308347  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1017 13:55:17.309743  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.080209ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.312254  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.957282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.312543  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1017 13:55:17.316136  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (3.219469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.322570  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.544011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.322819  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1017 13:55:17.324302  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.086458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.327340  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.927613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.327608  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1017 13:55:17.328988  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.133802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.331857  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.254497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.332104  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1017 13:55:17.333316  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.029624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.337824  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.728335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.338161  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1017 13:55:17.339951  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.557108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.348581  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.711399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.349060  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.349090  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.349132  108805 httplog.go:90] GET /healthz: (2.372304ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:17.349745  108805 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1017 13:55:17.360113  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.360381  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.360593  108805 httplog.go:90] GET /healthz: (1.617864ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
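Every httplog.go:90 entry above follows one shape: verb, path, latency in parentheses, the logged status code (recorded as 0 for these failing probes), then the client's user agent and address in brackets. The small Go sketch below pulls those fields apart, assuming only the format visible in this log; the regular expression is a best-effort reconstruction, not anything from httplog.go itself.

package main

import (
	"fmt"
	"regexp"
)

// httplogRe matches the shape of the httplog.go:90 entries above:
//   VERB /path: (latency) status [user-agent client-addr:port]
var httplogRe = regexp.MustCompile(`^(\S+) (\S+): \(([^)]+)\) (\d+) \[(.*) ([\d.]+:\d+)\]$`)

func main() {
	// Sample taken verbatim from the log above.
	line := `GET /healthz: (1.617864ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]`
	m := httplogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("line does not match the httplog shape")
		return
	}
	fmt.Printf("verb=%s path=%s latency=%s status=%s ua=%q addr=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}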
I1017 13:55:17.367968  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.107273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.389167  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.820374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.389455  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1017 13:55:17.412564  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (6.606002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.428353  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.285456ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.428626  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1017 13:55:17.451085  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (5.393862ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.452602  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.452636  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.452671  108805 httplog.go:90] GET /healthz: (6.123716ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:17.460116  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.460135  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.460171  108805 httplog.go:90] GET /healthz: (1.389657ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
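Two clients (the Go-http-client on 127.0.0.1:38982 and the scheduler.test binary on 127.0.0.1:46888) keep re-probing /healthz at roughly 100ms intervals until the bootstrap hooks finish, as the timestamps above show. A hedged sketch of such a wait loop follows; the URL, cadence, and timeout are assumptions read off the log, not the test harness's actual code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls until /healthz answers 200, the way the two test
// clients above keep re-probing while bootstrap runs.
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // probe rhythm observed in the log
	}
	return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}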
I1017 13:55:17.468006  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.364313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.468272  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1017 13:55:17.486602  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (891.405µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.507974  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.163602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.508293  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1017 13:55:17.527231  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.398658ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.548471  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.636141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.548849  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1017 13:55:17.548889  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.548912  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.548947  108805 httplog.go:90] GET /healthz: (2.235122ms) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:17.559877  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.560104  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.560279  108805 httplog.go:90] GET /healthz: (1.424477ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.567138  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.587111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.588515  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.477137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.588821  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1017 13:55:17.606851  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.116913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.627860  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.099343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.628032  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1017 13:55:17.646727  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.044927ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.650033  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.650059  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.650092  108805 httplog.go:90] GET /healthz: (2.181833ms) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:17.659669  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.659701  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.659744  108805 httplog.go:90] GET /healthz: (954.68µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.669144  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.510661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.669419  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1017 13:55:17.686980  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.222821ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.707996  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.242815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.708239  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1017 13:55:17.726827  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.159221ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.747998  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.748019  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.748016  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.260735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.748079  108805 httplog.go:90] GET /healthz: (1.440016ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:17.748265  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1017 13:55:17.761527  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.761578  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.761620  108805 httplog.go:90] GET /healthz: (1.024588ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.766830  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.139964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.788520  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.48177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.788789  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1017 13:55:17.808289  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (2.509175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.828675  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.27655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.828877  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1017 13:55:17.846825  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.16157ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:17.847237  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.847256  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.847280  108805 httplog.go:90] GET /healthz: (630.463µs) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:17.859533  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.859577  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.859614  108805 httplog.go:90] GET /healthz: (835.885µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.867579  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.592358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.867724  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1017 13:55:17.886847  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.144832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.907924  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.130146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.908378  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1017 13:55:17.927016  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.243929ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.948145  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.948190  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.948227  108805 httplog.go:90] GET /healthz: (1.3799ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:17.950335  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.625754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.950695  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1017 13:55:17.959489  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:17.959525  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:17.959587  108805 httplog.go:90] GET /healthz: (883.946µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.966664  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.019182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.988248  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.540785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:17.988515  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1017 13:55:18.006849  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.071489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.027903  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.120922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.028142  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1017 13:55:18.047215  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.431052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.047739  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.047766  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.047857  108805 httplog.go:90] GET /healthz: (1.082457ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:18.060003  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.060037  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.060075  108805 httplog.go:90] GET /healthz: (1.156667ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.068531  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.768206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.069841  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1017 13:55:18.087235  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.1975ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.109541  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.656726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.109909  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1017 13:55:18.127221  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.355148ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.149181  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.533567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.149442  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1017 13:55:18.149899  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.149937  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.149975  108805 httplog.go:90] GET /healthz: (3.336978ms) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:18.159804  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.159837  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.159893  108805 httplog.go:90] GET /healthz: (1.124184ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.167460  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.698913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.188680  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.875767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.188944  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1017 13:55:18.207167  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.325376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.228828  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.012657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.229144  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1017 13:55:18.248146  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.252662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.248307  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.248359  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.248405  108805 httplog.go:90] GET /healthz: (1.531207ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:18.260525  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.260573  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.260672  108805 httplog.go:90] GET /healthz: (1.835678ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.268603  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.788108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.269145  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1017 13:55:18.287238  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.435961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.310705  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.799253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.310996  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1017 13:55:18.326785  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.089273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.347795  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.347848  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.347899  108805 httplog.go:90] GET /healthz: (1.200705ms) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:18.350785  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.979542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.351057  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1017 13:55:18.360216  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.360276  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.360325  108805 httplog.go:90] GET /healthz: (1.43182ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.366838  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.19811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.387628  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.820942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.387862  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1017 13:55:18.407305  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.527139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.429075  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.304825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.429350  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1017 13:55:18.447070  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.2991ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.447341  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.447378  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.447409  108805 httplog.go:90] GET /healthz: (796.68µs) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:18.461586  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.461620  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.461673  108805 httplog.go:90] GET /healthz: (2.831152ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.469335  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.594391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.469686  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1017 13:55:18.487813  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.013842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.509499  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.395348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.509811  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1017 13:55:18.527007  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.224355ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.548357  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.548385  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.548422  108805 httplog.go:90] GET /healthz: (1.612547ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:18.548543  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.835686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.548791  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1017 13:55:18.560164  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.560207  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.560237  108805 httplog.go:90] GET /healthz: (1.072725ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.566933  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.256737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.589358  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.469855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.589673  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1017 13:55:18.607594  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.747232ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.628325  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.584962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.628573  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1017 13:55:18.647289  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.611032ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.647719  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.647761  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.647803  108805 httplog.go:90] GET /healthz: (852.644µs) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:18.661061  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.661091  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.661145  108805 httplog.go:90] GET /healthz: (2.245142ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.667487  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.885902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.667742  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1017 13:55:18.686750  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (966.351µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.717319  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (11.426737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.717942  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1017 13:55:18.727218  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.441434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.748709  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.748747  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.748801  108805 httplog.go:90] GET /healthz: (2.170173ms) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:18.749778  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.59363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.750039  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1017 13:55:18.760153  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.760190  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.760233  108805 httplog.go:90] GET /healthz: (1.391089ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.767195  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.545404ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.789295  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.424061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.789767  108805 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1017 13:55:18.806783  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.072541ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.808355  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.180848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.827789  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.046732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.828177  108805 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
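Namespaced bootstrap roles add one step to the pattern: before POSTing the Role, the reconciler confirms the target namespace exists (the GET /api/v1/namespaces/kube-system returning 200 above). The sketch below follows that three-request sequence against the same paths; it is not the actual storage_rbac.go code, and the base address, role name, and manifest body are assumptions.

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// apiBase is an assumption; the test apiserver's address is not in the log.
const apiBase = "http://127.0.0.1:8080"

// getStatus fetches a URL and returns only the HTTP status code.
func getStatus(url string) (int, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

// ensureNamespacedRole follows the three requests logged above: GET the
// role (404 while absent), confirm the namespace exists (200), POST the role.
func ensureNamespacedRole(ns, name string, manifest []byte) error {
	code, err := getStatus(apiBase + "/apis/rbac.authorization.k8s.io/v1/namespaces/" + ns + "/roles/" + name)
	if err != nil {
		return err
	}
	if code == http.StatusOK {
		return nil // role already present
	}
	nsCode, err := getStatus(apiBase + "/api/v1/namespaces/" + ns)
	if err != nil {
		return err
	}
	if nsCode != http.StatusOK {
		return fmt.Errorf("namespace %s not ready: status %d", ns, nsCode)
	}
	resp, err := http.Post(apiBase+"/apis/rbac.authorization.k8s.io/v1/namespaces/"+ns+"/roles",
		"application/json", bytes.NewReader(manifest))
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create of role %s/%s failed: status %d", ns, name, resp.StatusCode)
	}
	return nil
}

func main() {
	// A minimal hypothetical Role manifest, not the real
	// extension-apiserver-authentication-reader rules.
	manifest := []byte(`{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"namespace":"kube-system","name":"demo-reader"}}`)
	if err := ensureNamespacedRole("kube-system", "demo-reader", manifest); err != nil {
		fmt.Println(err)
	}
}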
I1017 13:55:18.847864  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.847900  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.847946  108805 httplog.go:90] GET /healthz: (1.108338ms) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:18.848435  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.540919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.850110  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.066483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.859789  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.859839  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.859876  108805 httplog.go:90] GET /healthz: (1.106555ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.868744  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.566617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.869044  108805 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1017 13:55:18.887643  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.236219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.889743  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.445456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.908712  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.2343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.909003  108805 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1017 13:55:18.927036  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.075684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.928978  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.511813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.947700  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.923425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:18.947902  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.947922  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.947951  108805 httplog.go:90] GET /healthz: (1.259071ms) 0 [Go-http-client/1.1 127.0.0.1:46888]
I1017 13:55:18.947988  108805 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1017 13:55:18.961372  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:18.961406  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:18.961464  108805 httplog.go:90] GET /healthz: (2.542178ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.966814  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.04109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.970503  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.183452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.987636  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.961595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:18.987958  108805 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1017 13:55:19.006741  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.028035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.008476  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.281883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.027665  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.930923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.028056  108805 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1017 13:55:19.046756  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.065357ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.048817  108805 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.397209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.049003  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:19.049028  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:19.049058  108805 httplog.go:90] GET /healthz: (2.369149ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:19.059806  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:19.059838  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:19.059886  108805 httplog.go:90] GET /healthz: (954.574µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.067703  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.736066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.068048  108805 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1017 13:55:19.087084  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.251168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.088854  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.312652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.401374  108805 request.go:538] Throttling request took 312.001344ms, request: POST:http://127.0.0.1:41553/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings
I1017 13:55:19.422179  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:19.422225  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:19.422308  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:19.422328  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:19.422372  108805 httplog.go:90] GET /healthz: (3.019251ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.422384  108805 httplog.go:90] GET /healthz: (3.408916ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:19.431172  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.360576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46888]
I1017 13:55:19.431859  108805 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1017 13:55:19.434087  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.570105ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.436353  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.875014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.439692  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.691708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.439918  108805 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1017 13:55:19.440968  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (806.794µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.442692  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.255664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.444964  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.714577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.445159  108805 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1017 13:55:19.446927  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.548969ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.447868  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:19.448001  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:19.448229  108805 httplog.go:90] GET /healthz: (1.512878ms) 0 [Go-http-client/1.1 127.0.0.1:38982]
I1017 13:55:19.450201  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.101291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.453145  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.415782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.453341  108805 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1017 13:55:19.481515  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (27.903944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.484390  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.087561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.484711  108805 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:55:19.484735  108805 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:55:19.484765  108805 httplog.go:90] GET /healthz: (1.398932ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:19.486957  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.1367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.487250  108805 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1017 13:55:19.504767  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (17.284788ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.508238  108805 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.868651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.511422  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.363567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.512393  108805 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1017 13:55:19.513270  108805 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (570.71µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.514571  108805 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.045793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.516897  108805 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.800666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.517091  108805 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1017 13:55:19.547766  108805 httplog.go:90] GET /healthz: (942.847µs) 200 [Go-http-client/1.1 127.0.0.1:53146]
I1017 13:55:19.551703  108805 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.178167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
W1017 13:55:19.552096  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552191  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552205  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552288  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552313  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552324  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552450  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552467  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552532  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552594  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:55:19.552615  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
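
The burst of "Mutation detector is enabled" warnings means the test binary runs client-go's informer caches with the cache mutation detector on: cached objects are deep-copied and periodically compared, so any consumer that mutates shared cache state in place panics the test, at the cost of the extra memory the warning mentions. The switch is the KUBE_CACHE_MUTATION_DETECTOR environment variable (a real client-go knob; where it is set below is illustrative):

package main

import "os"

func main() {
	// Must be set before informers are constructed; client-go reads it when
	// building each shared informer cache.
	os.Setenv("KUBE_CACHE_MUTATION_DETECTOR", "true")
}
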
I1017 13:55:19.554658  108805 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.846831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.555030  108805 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1017 13:55:19.555057  108805 factory.go:308] Registering predicate: PredicateOne
I1017 13:55:19.555064  108805 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1017 13:55:19.555070  108805 factory.go:308] Registering predicate: PredicateTwo
I1017 13:55:19.555074  108805 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1017 13:55:19.555079  108805 factory.go:323] Registering priority: PriorityOne
I1017 13:55:19.555084  108805 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1017 13:55:19.555092  108805 factory.go:323] Registering priority: PriorityTwo
I1017 13:55:19.555096  108805 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1017 13:55:19.555102  108805 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
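
The factory lines above come from the test loading a legacy scheduler Policy out of the scheduler-custom-policy-config-0 ConfigMap. The payload itself is not shown in the log; assuming the standard Policy v1 wire format, a configuration that would produce exactly this registration sequence (predicate/priority names and weights taken from the lines above) would look like:

package main

// customPolicy is a hypothetical reconstruction of the ConfigMap payload;
// only the names and weights are grounded in the log lines above.
const customPolicy = `{
	"kind": "Policy",
	"apiVersion": "v1",
	"predicates": [
		{"name": "PredicateOne"},
		{"name": "PredicateTwo"}
	],
	"priorities": [
		{"name": "PriorityOne", "weight": 1},
		{"name": "PriorityTwo", "weight": 5}
	]
}`

func main() { _ = customPolicy }
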
I1017 13:55:19.562376  108805 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (6.353505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.562627  108805 httplog.go:90] GET /healthz: (3.872518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
W1017 13:55:19.562893  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:55:19.566805  108805 httplog.go:90] GET /api/v1/namespaces/default: (3.682792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:19.567153  108805 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (3.978907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.567576  108805 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 13:55:19.567604  108805 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1017 13:55:19.567619  108805 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1017 13:55:19.567627  108805 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1017 13:55:19.569636  108805 httplog.go:90] POST /api/v1/namespaces: (2.206554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:19.574672  108805 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (5.796498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
W1017 13:55:19.575035  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:55:19.578907  108805 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (3.407263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.579225  108805 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 13:55:19.579256  108805 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I1017 13:55:19.582352  108805 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.458305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
W1017 13:55:19.582735  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:55:19.584914  108805 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (14.817021ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:19.586038  108805 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (2.889011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.586857  108805 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1017 13:55:19.586895  108805 factory.go:308] Registering predicate: PredicateOne
I1017 13:55:19.586905  108805 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1017 13:55:19.586913  108805 factory.go:308] Registering predicate: PredicateTwo
I1017 13:55:19.586919  108805 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1017 13:55:19.586926  108805 factory.go:323] Registering priority: PriorityOne
I1017 13:55:19.586934  108805 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1017 13:55:19.586946  108805 factory.go:323] Registering priority: PriorityTwo
I1017 13:55:19.586952  108805 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1017 13:55:19.586960  108805 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I1017 13:55:19.592259  108805 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (4.231699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
W1017 13:55:19.592747  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:55:19.593293  108805 httplog.go:90] POST /api/v1/namespaces/default/services: (7.320299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:19.595222  108805 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.360848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38982]
I1017 13:55:19.595349  108805 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (1.993518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.595763  108805 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 13:55:19.595815  108805 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1017 13:55:19.595827  108805 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1017 13:55:19.595834  108805 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1017 13:55:19.597953  108805 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.127932ms) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
E1017 13:55:19.598406  108805 controller.go:227] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
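
The endpoint reconciler error appears to be a side effect of the integration-test apiserver having no routable advertise address configured, so the kubernetes Endpoints subset is built with a "<nil>" IP and fails validation. For contrast, a well-formed object that would pass the check quoted above (all values illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A minimal Endpoints object satisfying the validation from the log:
// every address must carry a parseable IP.
var kubernetesEndpoints = corev1.Endpoints{
	ObjectMeta: metav1.ObjectMeta{Name: "kubernetes", Namespace: "default"},
	Subsets: []corev1.EndpointSubset{{
		Addresses: []corev1.EndpointAddress{{IP: "10.9.8.7"}}, // valid IP required
		Ports:     []corev1.EndpointPort{{Name: "https", Port: 6443}},
	}},
}

func main() { _ = kubernetesEndpoints }
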
I1017 13:55:19.749344  108805 request.go:538] Throttling request took 152.95364ms, request: POST:http://127.0.0.1:41553/api/v1/namespaces/kube-system/configmaps
I1017 13:55:19.755964  108805 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.201225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
W1017 13:55:19.756251  108805 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:55:19.949541  108805 request.go:538] Throttling request took 193.13136ms, request: GET:http://127.0.0.1:41553/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I1017 13:55:19.951646  108805 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.74962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I1017 13:55:19.952011  108805 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 13:55:19.952044  108805 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I1017 13:55:20.149484  108805 request.go:538] Throttling request took 197.103157ms, request: DELETE:http://127.0.0.1:41553/api/v1/nodes
I1017 13:55:20.152504  108805 httplog.go:90] DELETE /api/v1/nodes: (2.519396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
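
The request.go "Throttling request took ..." lines here and above are client-side, not server-side: the test's REST client uses a token-bucket rate limiter and logs whenever a request had to wait noticeably for a token. The knobs live on rest.Config; a sketch with illustrative values:

package main

import "k8s.io/client-go/rest"

// buildConfig returns a client config with a larger token bucket, so bursts
// of sequential requests are less likely to trip the throttling log line.
func buildConfig(host string) *rest.Config {
	return &rest.Config{
		Host:  host,
		QPS:   50,  // sustained requests per second
		Burst: 100, // bucket size for short bursts
	}
}

func main() { _ = buildConfig("http://127.0.0.1:41553") }
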
I1017 13:55:20.152847  108805 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1017 13:55:20.154683  108805 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.51268ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
--- FAIL: TestSchedulerCreationFromConfigMap (4.30s)
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]
    scheduler_test.go:310: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{}]
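
Every one of the six diffs has the same shape: the expected predicate set contains PodToleratesNodeTaints and the actual set does not. That is consistent with the PR under test (TaintNodesByCondition going GA): the test expects PodToleratesNodeTaints to join CheckNodeUnschedulable as a mandatory predicate appended to every scheduler configuration, while the scheduler built from the ConfigMaps only adds CheckNodeUnschedulable. A paraphrase of the comparison behind scheduler_test.go:310 — a sketch, not the actual test code:

package main

import (
	"fmt"
	"reflect"
)

// mandatory mirrors what the expectations imply once TaintNodesByCondition
// is GA: these predicates are forced into every configuration.
var mandatory = []string{"CheckNodeUnschedulable", "PodToleratesNodeTaints"}

// withMandatory unions a policy's own predicates with the mandatory set;
// the failing assertion compares this against the scheduler's actual map.
func withMandatory(policy []string) map[string]struct{} {
	out := map[string]struct{}{}
	for _, p := range append(policy, mandatory...) {
		out[p] = struct{}{}
	}
	return out
}

func main() {
	expected := withMandatory([]string{"PredicateOne", "PredicateTwo"})
	got := map[string]struct{}{ // what the first diff above reports
		"CheckNodeUnschedulable": {}, "PredicateOne": {}, "PredicateTwo": {},
	}
	if !reflect.DeepEqual(expected, got) {
		fmt.Printf("Expected predicates %v, got %v\n", expected, got)
	}
}
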

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191017-134547.xml




Error lines from build-log.txt

... skipping 556 lines ...
W1017 13:40:49.231] I1017 13:40:49.054859   53053 controllermanager.go:534] Started "daemonset"
W1017 13:40:49.231] I1017 13:40:49.054886   53053 daemon_controller.go:267] Starting daemon sets controller
W1017 13:40:49.231] I1017 13:40:49.055007   53053 shared_informer.go:197] Waiting for caches to sync for daemon sets
W1017 13:40:49.231] I1017 13:40:49.055463   53053 controllermanager.go:534] Started "ttl"
W1017 13:40:49.231] I1017 13:40:49.055587   53053 ttl_controller.go:116] Starting TTL controller
W1017 13:40:49.231] I1017 13:40:49.055664   53053 shared_informer.go:197] Waiting for caches to sync for TTL
W1017 13:40:49.231] E1017 13:40:49.055869   53053 core.go:79] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1017 13:40:49.231] W1017 13:40:49.055884   53053 controllermanager.go:526] Skipping "service"
W1017 13:40:49.232] I1017 13:40:49.056083   53053 node_lifecycle_controller.go:77] Sending events to api server
W1017 13:40:49.232] E1017 13:40:49.056106   53053 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W1017 13:40:49.232] W1017 13:40:49.056115   53053 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W1017 13:40:49.232] I1017 13:40:49.056463   53053 controllermanager.go:534] Started "persistentvolume-binder"
W1017 13:40:49.232] I1017 13:40:49.056822   53053 controllermanager.go:534] Started "podgc"
W1017 13:40:49.232] I1017 13:40:49.056985   53053 pv_controller_base.go:289] Starting persistent volume controller
W1017 13:40:49.232] I1017 13:40:49.057011   53053 shared_informer.go:197] Waiting for caches to sync for persistent volume
W1017 13:40:49.233] I1017 13:40:49.057082   53053 gc_controller.go:75] Starting GC controller
... skipping 110 lines ...
W1017 13:40:49.966] I1017 13:40:49.754286   53053 shared_informer.go:204] Caches are synced for ReplicaSet 
W1017 13:40:49.966] I1017 13:40:49.754655   53053 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W1017 13:40:49.966] I1017 13:40:49.757195   53053 shared_informer.go:204] Caches are synced for GC 
W1017 13:40:49.966] I1017 13:40:49.761749   53053 shared_informer.go:204] Caches are synced for HPA 
W1017 13:40:49.967] I1017 13:40:49.772000   53053 shared_informer.go:204] Caches are synced for job 
W1017 13:40:49.967] I1017 13:40:49.775349   53053 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W1017 13:40:49.967] E1017 13:40:49.786720   53053 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1017 13:40:49.967] I1017 13:40:49.794678   53053 shared_informer.go:204] Caches are synced for ReplicationController 
W1017 13:40:49.968] E1017 13:40:49.795878   53053 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W1017 13:40:49.968] I1017 13:40:49.795951   53053 shared_informer.go:204] Caches are synced for PV protection 
W1017 13:40:49.968] I1017 13:40:49.795980   53053 shared_informer.go:204] Caches are synced for PVC protection 
W1017 13:40:49.968] I1017 13:40:49.896366   53053 shared_informer.go:204] Caches are synced for deployment 
W1017 13:40:49.969] W1017 13:40:49.899200   53053 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1017 13:40:49.969] I1017 13:40:49.955291   53053 shared_informer.go:204] Caches are synced for daemon sets 
W1017 13:40:49.969] I1017 13:40:49.957827   53053 shared_informer.go:204] Caches are synced for TTL 
W1017 13:40:49.969] I1017 13:40:49.960090   53053 shared_informer.go:204] Caches are synced for attach detach 
W1017 13:40:49.969] I1017 13:40:49.961405   53053 shared_informer.go:204] Caches are synced for taint 
W1017 13:40:49.970] I1017 13:40:49.961489   53053 node_lifecycle_controller.go:1282] Initializing eviction metric for zone: 
W1017 13:40:49.970] I1017 13:40:49.961683   53053 node_lifecycle_controller.go:1132] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
... skipping 73 lines ...
I1017 13:40:53.295] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:40:53.298] +++ command: run_RESTMapper_evaluation_tests
I1017 13:40:53.309] +++ [1017 13:40:53] Creating namespace namespace-1571319653-7592
I1017 13:40:53.381] namespace/namespace-1571319653-7592 created
I1017 13:40:53.450] Context "test" modified.
I1017 13:40:53.455] +++ [1017 13:40:53] Testing RESTMapper
I1017 13:40:53.559] +++ [1017 13:40:53] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1017 13:40:53.572] +++ exit code: 0
I1017 13:40:53.684] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1017 13:40:53.685] bindings                                                                      true         Binding
I1017 13:40:53.685] componentstatuses                 cs                                          false        ComponentStatus
I1017 13:40:53.685] configmaps                        cm                                          true         ConfigMap
I1017 13:40:53.685] endpoints                         ep                                          true         Endpoints
... skipping 317 lines ...
I1017 13:41:06.035] (Bcore.sh:79: Successful get pods/valid-pod {{.metadata.name}}: valid-pod
I1017 13:41:06.115] (Bcore.sh:81: Successful get pods {.items[*].metadata.name}: valid-pod
I1017 13:41:06.195] (Bcore.sh:82: Successful get pod valid-pod {.metadata.name}: valid-pod
I1017 13:41:06.282] (Bcore.sh:83: Successful get pod/valid-pod {.metadata.name}: valid-pod
I1017 13:41:06.367] (Bcore.sh:84: Successful get pods/valid-pod {.metadata.name}: valid-pod
I1017 13:41:06.463] (B
I1017 13:41:06.467] core.sh:86: FAIL!
I1017 13:41:06.468] Describe pods valid-pod
I1017 13:41:06.468]   Expected Match: Name:
I1017 13:41:06.468]   Not found in:
I1017 13:41:06.468] Name:         valid-pod
I1017 13:41:06.468] Namespace:    namespace-1571319665-25911
I1017 13:41:06.468] Priority:     0
... skipping 108 lines ...
I1017 13:41:06.768] QoS Class:        Guaranteed
I1017 13:41:06.768] Node-Selectors:   <none>
I1017 13:41:06.768] Tolerations:      <none>
I1017 13:41:06.768] Events:           <none>
I1017 13:41:06.769] (B
I1017 13:41:06.857] 
I1017 13:41:06.858] FAIL!
I1017 13:41:06.858] Describe pods
I1017 13:41:06.858]   Expected Match: Name:
I1017 13:41:06.859]   Not found in:
I1017 13:41:06.859] Name:         valid-pod
I1017 13:41:06.859] Namespace:    namespace-1571319665-25911
I1017 13:41:06.859] Priority:     0
... skipping 179 lines ...
I1017 13:41:12.884] (Bpoddisruptionbudget.policy/test-pdb-3 created
I1017 13:41:12.979] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1017 13:41:13.055] (Bpoddisruptionbudget.policy/test-pdb-4 created
I1017 13:41:13.152] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1017 13:41:13.323] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:41:13.503] (Bpod/env-test-pod created
W1017 13:41:13.604] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1017 13:41:13.604] error: setting 'all' parameter but found a non empty selector. 
W1017 13:41:13.605] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:41:13.605] I1017 13:41:12.547898   49514 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W1017 13:41:13.605] error: min-available and max-unavailable cannot be both specified
I1017 13:41:13.705] 
I1017 13:41:13.706] core.sh:264: FAIL!
I1017 13:41:13.706] Describe pods --namespace=test-kubectl-describe-pod env-test-pod
I1017 13:41:13.706]   Expected Match: TEST_CMD_1
I1017 13:41:13.706]   Not found in:
I1017 13:41:13.706] Name:         env-test-pod
I1017 13:41:13.706] Namespace:    test-kubectl-describe-pod
I1017 13:41:13.706] Priority:     0
... skipping 23 lines ...
I1017 13:41:13.709] Tolerations:       <none>
I1017 13:41:13.709] Events:            <none>
I1017 13:41:13.709] (B
I1017 13:41:13.709] 264 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1017 13:41:13.709] (B
I1017 13:41:13.711] 
I1017 13:41:13.711] FAIL!
I1017 13:41:13.711] Describe pods --namespace=test-kubectl-describe-pod
I1017 13:41:13.711]   Expected Match: TEST_CMD_1
I1017 13:41:13.711]   Not found in:
I1017 13:41:13.711] Name:         env-test-pod
I1017 13:41:13.711] Namespace:    test-kubectl-describe-pod
I1017 13:41:13.711] Priority:     0
... skipping 35 lines ...
I1017 13:41:14.116] namespace "test-kubectl-describe-pod" deleted
I1017 13:41:19.247] +++ [1017 13:41:19] Creating namespace namespace-1571319679-17823
I1017 13:41:19.336] namespace/namespace-1571319679-17823 created
I1017 13:41:19.407] Context "test" modified.
I1017 13:41:19.493] core.sh:278: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:41:19.648] (Bpod/valid-pod created
W1017 13:41:19.799] error: the path "test/e2e/testing-manifests/kubectl/redis-master-pod.yaml" does not exist
I1017 13:41:19.900] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: redis-master:valid-pod:, got: valid-pod:
I1017 13:41:19.901] 
I1017 13:41:19.901] core.sh:283: FAIL!
I1017 13:41:19.901] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I1017 13:41:19.902]   Expected: redis-master:valid-pod:
I1017 13:41:19.902]   Got:      valid-pod:
I1017 13:41:19.902] (B
I1017 13:41:19.902] 53 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I1017 13:41:19.903] (B
I1017 13:41:19.989] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: redis-master:valid-pod:, got: valid-pod:
I1017 13:41:19.990] 
I1017 13:41:19.994] core.sh:287: FAIL!
I1017 13:41:19.995] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I1017 13:41:19.995]   Expected: redis-master:valid-pod:
I1017 13:41:19.996]   Got:      valid-pod:
I1017 13:41:19.996] (B
I1017 13:41:19.996] 53 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I1017 13:41:19.996] (B
... skipping 8 lines ...
I1017 13:41:20.783] (Bcore.sh:304: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:
I1017 13:41:20.865] (Bpod/valid-pod labeled
I1017 13:41:20.960] core.sh:308: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:new-valid-pod:
I1017 13:41:21.058] (Bcore.sh:312: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:new-valid-pod:
I1017 13:41:21.140] (Bpod/valid-pod labeled
W1017 13:41:21.241] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:41:21.242] Error from server (NotFound): pods "redis-master" not found
I1017 13:41:21.342] core.sh:316: Successful get pod valid-pod {{.metadata.labels.emptylabel}}: 
I1017 13:41:21.379] (Bcore.sh:320: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}: <no value>
I1017 13:41:21.461] (Bpod/valid-pod annotated
I1017 13:41:21.556] core.sh:324: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}: 
I1017 13:41:21.644] (Bcore.sh:328: Successful get pod valid-pod {{range.items}}{{.metadata.annotations}}:{{end}}: 
I1017 13:41:21.738] (Bpod/valid-pod labeled
... skipping 85 lines ...
I1017 13:41:26.775] (Bpod/valid-pod patched
I1017 13:41:26.864] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1017 13:41:26.935] (Bpod/valid-pod patched
I1017 13:41:27.027] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1017 13:41:27.182] (Bpod/valid-pod patched
I1017 13:41:27.282] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1017 13:41:27.449] (B+++ [1017 13:41:27] "kubectl patch with resourceVersion 495" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
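
The 409 above is intentional: the test patches with a stale resourceVersion to confirm optimistic concurrency is enforced. In client code the standard remedy is a read-modify-write loop that refetches on conflict, which client-go packages as retry.RetryOnConflict. A sketch (clientset, namespace, name, and label are assumptions; Get/Update signatures follow client-go v0.18+):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel retries the read-modify-write loop whenever the apiserver
// answers 409 Conflict, picking up the latest resourceVersion each attempt.
func updatePodLabel(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["touched"] = "true"
		_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return err
	})
}

func main() {}
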
I1017 13:41:27.678] pod "valid-pod" deleted
I1017 13:41:27.687] pod/valid-pod replaced
I1017 13:41:27.778] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1017 13:41:27.924] (BSuccessful
I1017 13:41:27.924] message:error: --grace-period must have --force specified
I1017 13:41:27.925] has:\-\-grace-period must have \-\-force specified
I1017 13:41:28.066] Successful
I1017 13:41:28.067] message:error: --timeout must have --force specified
I1017 13:41:28.067] has:\-\-timeout must have \-\-force specified
I1017 13:41:28.216] node/node-v1-test created
W1017 13:41:28.317] W1017 13:41:28.215967   53053 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1017 13:41:28.418] node/node-v1-test replaced
I1017 13:41:28.464] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1017 13:41:28.539] (Bnode "node-v1-test" deleted
I1017 13:41:28.632] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1017 13:41:28.885] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I1017 13:41:29.820] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 66 lines ...
I1017 13:41:33.823] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:41:33.964] (Bpod/test-pod created
W1017 13:41:34.065] Edit cancelled, no changes made.
W1017 13:41:34.065] Edit cancelled, no changes made.
W1017 13:41:34.065] Edit cancelled, no changes made.
W1017 13:41:34.065] Edit cancelled, no changes made.
W1017 13:41:34.065] error: 'name' already has a value (valid-pod), and --overwrite is false
W1017 13:41:34.065] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:41:34.066] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1017 13:41:34.166] pod "test-pod" deleted
I1017 13:41:34.166] +++ [1017 13:41:34] Creating namespace namespace-1571319694-25722
I1017 13:41:34.216] namespace/namespace-1571319694-25722 created
I1017 13:41:34.299] Context "test" modified.
... skipping 41 lines ...
I1017 13:41:37.338] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1017 13:41:37.340] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:41:37.343] +++ command: run_kubectl_create_error_tests
I1017 13:41:37.353] +++ [1017 13:41:37] Creating namespace namespace-1571319697-27810
I1017 13:41:37.421] namespace/namespace-1571319697-27810 created
I1017 13:41:37.492] Context "test" modified.
I1017 13:41:37.498] +++ [1017 13:41:37] Testing kubectl create with error
W1017 13:41:37.598] Error: must specify one of -f and -k
W1017 13:41:37.598] 
W1017 13:41:37.599] Create a resource from a file or from stdin.
W1017 13:41:37.599] 
W1017 13:41:37.599]  JSON and YAML formats are accepted.
W1017 13:41:37.599] 
W1017 13:41:37.599] Examples:
... skipping 41 lines ...
W1017 13:41:37.606] 
W1017 13:41:37.607] Usage:
W1017 13:41:37.607]   kubectl create -f FILENAME [options]
W1017 13:41:37.607] 
W1017 13:41:37.607] Use "kubectl <command> --help" for more information about a given command.
W1017 13:41:37.607] Use "kubectl options" for a list of global command-line options (applies to all commands).
I1017 13:41:37.711] +++ [1017 13:41:37] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1017 13:41:37.812] kubectl convert is DEPRECATED and will be removed in a future version.
W1017 13:41:37.812] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1017 13:41:37.913] +++ exit code: 0
I1017 13:41:37.918] Recording: run_kubectl_apply_tests
I1017 13:41:37.919] Running command: run_kubectl_apply_tests
I1017 13:41:37.939] 
... skipping 16 lines ...
I1017 13:41:39.379] apply.sh:289: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I1017 13:41:39.455] (Bpod "test-pod" deleted
I1017 13:41:39.659] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W1017 13:41:39.925] I1017 13:41:39.924537   49514 client.go:357] parsed scheme: "endpoint"
W1017 13:41:39.925] I1017 13:41:39.924602   49514 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1017 13:41:39.929] I1017 13:41:39.928648   49514 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W1017 13:41:40.010] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1017 13:41:40.112] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I1017 13:41:40.112] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1017 13:41:40.114] +++ exit code: 0
I1017 13:41:40.144] Recording: run_kubectl_run_tests
I1017 13:41:40.144] Running command: run_kubectl_run_tests
I1017 13:41:40.165] 
... skipping 5 lines ...
I1017 13:41:40.337] Context "test" modified.
I1017 13:41:40.343] +++ [1017 13:41:40] Testing kubectl run
I1017 13:41:40.424] run.sh:29: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:41:40.511] (Bjob.batch/pi created
I1017 13:41:40.603] run.sh:33: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: pi:
I1017 13:41:40.694] (B
I1017 13:41:40.694] FAIL!
I1017 13:41:40.694] Describe pods
I1017 13:41:40.694]   Expected Match: Name:
I1017 13:41:40.694]   Not found in:
I1017 13:41:40.694] Name:           pi-j94l4
I1017 13:41:40.695] Namespace:      namespace-1571319700-20625
I1017 13:41:40.695] Priority:       0
... skipping 84 lines ...
I1017 13:41:42.633] Context "test" modified.
I1017 13:41:42.639] +++ [1017 13:41:42] Testing kubectl create filter
I1017 13:41:42.724] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:41:42.876] (Bpod/selector-test-pod created
I1017 13:41:42.964] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1017 13:41:43.046] (BSuccessful
I1017 13:41:43.047] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1017 13:41:43.047] has:pods "selector-test-pod-dont-apply" not found
I1017 13:41:43.121] pod "selector-test-pod" deleted
I1017 13:41:43.139] +++ exit code: 0
I1017 13:41:43.171] Recording: run_kubectl_apply_deployments_tests
I1017 13:41:43.171] Running command: run_kubectl_apply_deployments_tests
I1017 13:41:43.194] 
... skipping 27 lines ...
I1017 13:41:44.859] (Bapps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:41:44.933] (Bapps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:41:45.015] (Bapps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:41:45.178] (Bdeployment.apps/nginx created
I1017 13:41:45.279] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I1017 13:41:49.470] (BSuccessful
I1017 13:41:49.471] message:Error from server (Conflict): error when applying patch:
I1017 13:41:49.471] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1571319703-6767\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1017 13:41:49.472] to:
I1017 13:41:49.472] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I1017 13:41:49.472] Name: "nginx", Namespace: "namespace-1571319703-6767"
I1017 13:41:49.474] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1571319703-6767\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-10-17T13:41:45Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1571319703-6767" "resourceVersion":"587" "selfLink":"/apis/apps/v1/namespaces/namespace-1571319703-6767/deployments/nginx" "uid":"01baea9c-aa59-4aa9-b039-5192442f5079"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-10-17T13:41:45Z" "lastUpdateTime":"2019-10-17T13:41:45Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-10-17T13:41:45Z" "lastUpdateTime":"2019-10-17T13:41:45Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I1017 13:41:49.474] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I1017 13:41:49.475] has:Error from server (Conflict)
W1017 13:41:49.576] I1017 13:41:45.183145   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319703-6767", Name:"nginx", UID:"01baea9c-aa59-4aa9-b039-5192442f5079", APIVersion:"apps/v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W1017 13:41:49.577] I1017 13:41:45.188325   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319703-6767", Name:"nginx-8484dd655", UID:"974242b3-bfa7-4bd2-8a24-99160354df64", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-8cbg7
W1017 13:41:49.578] I1017 13:41:45.191688   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319703-6767", Name:"nginx-8484dd655", UID:"974242b3-bfa7-4bd2-8a24-99160354df64", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-txqmt
W1017 13:41:49.578] I1017 13:41:45.192305   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319703-6767", Name:"nginx-8484dd655", UID:"974242b3-bfa7-4bd2-8a24-99160354df64", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-btrkq
W1017 13:41:51.778] I1017 13:41:51.777851   53053 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1571319694-1978
I1017 13:41:54.676] deployment.apps/nginx configured
... skipping 146 lines ...
I1017 13:42:01.895] +++ [1017 13:42:01] Creating namespace namespace-1571319721-20932
I1017 13:42:01.986] namespace/namespace-1571319721-20932 created
I1017 13:42:02.056] Context "test" modified.
I1017 13:42:02.062] +++ [1017 13:42:02] Testing kubectl get
I1017 13:42:02.144] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:42:02.235] (BSuccessful
I1017 13:42:02.235] message:Error from server (NotFound): pods "abc" not found
I1017 13:42:02.235] has:pods "abc" not found
I1017 13:42:02.336] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:42:02.425] (BSuccessful
I1017 13:42:02.425] message:Error from server (NotFound): pods "abc" not found
I1017 13:42:02.425] has:pods "abc" not found
I1017 13:42:02.506] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:42:02.588] (BSuccessful
I1017 13:42:02.588] message:{
I1017 13:42:02.588]     "apiVersion": "v1",
I1017 13:42:02.589]     "items": [],
... skipping 23 lines ...
I1017 13:42:02.936] has not:No resources found
I1017 13:42:03.016] Successful
I1017 13:42:03.016] message:NAME
I1017 13:42:03.016] has not:No resources found
I1017 13:42:03.100] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:42:03.197] (BSuccessful
I1017 13:42:03.197] message:error: the server doesn't have a resource type "foobar"
I1017 13:42:03.197] has not:No resources found
I1017 13:42:03.281] Successful
I1017 13:42:03.281] message:No resources found in namespace-1571319721-20932 namespace.
I1017 13:42:03.281] has:No resources found
I1017 13:42:03.360] Successful
I1017 13:42:03.361] message:
I1017 13:42:03.361] has not:No resources found
I1017 13:42:03.442] Successful
I1017 13:42:03.442] message:No resources found in namespace-1571319721-20932 namespace.
I1017 13:42:03.443] has:No resources found
I1017 13:42:03.523] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:42:03.603] (BSuccessful
I1017 13:42:03.604] message:Error from server (NotFound): pods "abc" not found
I1017 13:42:03.604] has:pods "abc" not found
I1017 13:42:03.605] FAIL!
I1017 13:42:03.605] message:Error from server (NotFound): pods "abc" not found
I1017 13:42:03.605] has not:List
I1017 13:42:03.605] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1017 13:42:03.703] Successful
I1017 13:42:03.703] message:I1017 13:42:03.663419   62827 loader.go:375] Config loaded from file:  /tmp/tmp.aM3T9ox52p/.kube/config
I1017 13:42:03.704] I1017 13:42:03.664796   62827 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
I1017 13:42:03.704] I1017 13:42:03.682912   62827 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I1017 13:42:09.247] Successful
I1017 13:42:09.248] message:NAME    DATA   AGE
I1017 13:42:09.248] one     0      1s
I1017 13:42:09.248] three   0      0s
I1017 13:42:09.249] two     0      0s
I1017 13:42:09.249] STATUS    REASON          MESSAGE
I1017 13:42:09.250] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:42:09.250] has not:watch is only supported on individual resources
I1017 13:42:10.356] Successful
I1017 13:42:10.357] message:STATUS    REASON          MESSAGE
I1017 13:42:10.357] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:42:10.358] has not:watch is only supported on individual resources
I1017 13:42:10.362] +++ [1017 13:42:10] Creating namespace namespace-1571319730-23311
I1017 13:42:10.438] namespace/namespace-1571319730-23311 created
I1017 13:42:10.509] Context "test" modified.
I1017 13:42:10.599] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:42:10.743] (Bpod/valid-pod created
... skipping 56 lines ...
I1017 13:42:10.833] }
I1017 13:42:10.925] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:42:11.171] (B<no value>Successful
I1017 13:42:11.172] message:valid-pod:
I1017 13:42:11.172] has:valid-pod:
I1017 13:42:11.280] Successful
I1017 13:42:11.280] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1017 13:42:11.280] 	template was:
I1017 13:42:11.280] 		{.missing}
I1017 13:42:11.280] 	object given to jsonpath engine was:
I1017 13:42:11.282] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-10-17T13:42:10Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1571319730-23311", "resourceVersion":"689", "selfLink":"/api/v1/namespaces/namespace-1571319730-23311/pods/valid-pod", "uid":"fb26ccb1-3e48-424e-8441-9fa6b8d6c9ea"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1017 13:42:11.282] has:missing is not found
I1017 13:42:11.372] Successful
I1017 13:42:11.372] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1017 13:42:11.372] 	template was:
I1017 13:42:11.372] 		{{.missing}}
I1017 13:42:11.372] 	raw data was:
I1017 13:42:11.373] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-10-17T13:42:10Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1571319730-23311","resourceVersion":"689","selfLink":"/api/v1/namespaces/namespace-1571319730-23311/pods/valid-pod","uid":"fb26ccb1-3e48-424e-8441-9fa6b8d6c9ea"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1017 13:42:11.373] 	object given to template engine was:
I1017 13:42:11.374] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-10-17T13:42:10Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1571319730-23311 resourceVersion:689 selfLink:/api/v1/namespaces/namespace-1571319730-23311/pods/valid-pod uid:fb26ccb1-3e48-424e-8441-9fa6b8d6c9ea] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I1017 13:42:11.374] has:map has no entry for key "missing"
W1017 13:42:11.474] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
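The two failures above are kubectl's template error paths: -o jsonpath reports "missing is not found" and dumps the object, while -o go-template reports map has no entry for key "missing" and dumps the raw data. A minimal sketch of reproducing both by hand, assuming the valid-pod created above is still present:
    kubectl get pod valid-pod -o jsonpath='{.missing}'       # error: ... missing is not found
    kubectl get pod valid-pod -o go-template='{{.missing}}'  # error: ... map has no entry for key "missing"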
I1017 13:42:12.458] Successful
I1017 13:42:12.459] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 13:42:12.459] valid-pod   0/1     Pending   0          1s
I1017 13:42:12.459] STATUS      REASON          MESSAGE
I1017 13:42:12.460] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:42:12.460] has:STATUS
I1017 13:42:12.461] Successful
I1017 13:42:12.461] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 13:42:12.461] valid-pod   0/1     Pending   0          1s
I1017 13:42:12.461] STATUS      REASON          MESSAGE
I1017 13:42:12.462] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:42:12.462] has:valid-pod
I1017 13:42:13.561] Successful
I1017 13:42:13.562] message:pod/valid-pod
I1017 13:42:13.562] has not:STATUS
I1017 13:42:13.563] Successful
I1017 13:42:13.563] message:pod/valid-pod
... skipping 72 lines ...
I1017 13:42:14.653] status:
I1017 13:42:14.653]   phase: Pending
I1017 13:42:14.653]   qosClass: Guaranteed
I1017 13:42:14.653] ---
I1017 13:42:14.653] has:name: valid-pod
I1017 13:42:14.726] Successful
I1017 13:42:14.727] message:Error from server (NotFound): pods "invalid-pod" not found
I1017 13:42:14.727] has:"invalid-pod" not found
I1017 13:42:14.801] pod "valid-pod" deleted
I1017 13:42:14.886] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:42:15.041] pod/redis-master created
I1017 13:42:15.044] pod/valid-pod created
I1017 13:42:15.138] Successful
... skipping 35 lines ...
I1017 13:42:16.224] +++ command: run_kubectl_exec_pod_tests
I1017 13:42:16.234] +++ [1017 13:42:16] Creating namespace namespace-1571319736-19427
I1017 13:42:16.323] namespace/namespace-1571319736-19427 created
I1017 13:42:16.396] Context "test" modified.
I1017 13:42:16.402] +++ [1017 13:42:16] Testing kubectl exec POD COMMAND
I1017 13:42:16.486] Successful
I1017 13:42:16.486] message:Error from server (NotFound): pods "abc" not found
I1017 13:42:16.487] has:pods "abc" not found
I1017 13:42:16.631] pod/test-pod created
I1017 13:42:16.724] Successful
I1017 13:42:16.725] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 13:42:16.725] has not:pods "test-pod" not found
I1017 13:42:16.726] Successful
I1017 13:42:16.726] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 13:42:16.727] has not:pod or type/name must be specified
I1017 13:42:16.806] pod "test-pod" deleted
I1017 13:42:16.823] +++ exit code: 0
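The exec POD cases above show kubectl validating the target before it dials a kubelet: a nonexistent pod is a NotFound from the server, while a pod that exists but is not yet scheduled is a BadRequest. Roughly, with the pending test-pod from this test:
    kubectl exec abc -- date        # Error from server (NotFound): pods "abc" not found
    kubectl exec test-pod -- date   # Error from server (BadRequest): pod test-pod does not have a host assigned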
I1017 13:42:16.852] Recording: run_kubectl_exec_resource_name_tests
I1017 13:42:16.853] Running command: run_kubectl_exec_resource_name_tests
I1017 13:42:16.872] 
... skipping 2 lines ...
I1017 13:42:16.879] +++ command: run_kubectl_exec_resource_name_tests
I1017 13:42:16.890] +++ [1017 13:42:16] Creating namespace namespace-1571319736-943
I1017 13:42:16.960] namespace/namespace-1571319736-943 created
I1017 13:42:17.029] Context "test" modified.
I1017 13:42:17.035] +++ [1017 13:42:17] Testing kubectl exec TYPE/NAME COMMAND
I1017 13:42:17.129] Successful
I1017 13:42:17.130] message:error: the server doesn't have a resource type "foo"
I1017 13:42:17.130] has:error:
I1017 13:42:17.219] Successful
I1017 13:42:17.220] message:Error from server (NotFound): deployments.apps "bar" not found
I1017 13:42:17.220] has:"bar" not found
I1017 13:42:17.375] pod/test-pod created
I1017 13:42:17.536] replicaset.apps/frontend created
W1017 13:42:17.636] I1017 13:42:17.538685   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319736-943", Name:"frontend", UID:"5e031bb0-14cd-486b-a193-ed54332d19cf", APIVersion:"apps/v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gr5jb
W1017 13:42:17.637] I1017 13:42:17.541041   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319736-943", Name:"frontend", UID:"5e031bb0-14cd-486b-a193-ed54332d19cf", APIVersion:"apps/v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kzdpw
W1017 13:42:17.637] I1017 13:42:17.542253   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319736-943", Name:"frontend", UID:"5e031bb0-14cd-486b-a193-ed54332d19cf", APIVersion:"apps/v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r5twz
I1017 13:42:17.738] configmap/test-set-env-config created
I1017 13:42:17.796] Successful
I1017 13:42:17.796] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I1017 13:42:17.796] has:not implemented
I1017 13:42:17.886] Successful
I1017 13:42:17.886] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 13:42:17.887] has not:not found
I1017 13:42:17.888] Successful
I1017 13:42:17.888] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 13:42:17.888] has not:pod or type/name must be specified
I1017 13:42:17.985] Successful
I1017 13:42:17.986] message:Error from server (BadRequest): pod frontend-gr5jb does not have a host assigned
I1017 13:42:17.986] has not:not found
I1017 13:42:17.987] Successful
I1017 13:42:17.987] message:Error from server (BadRequest): pod frontend-gr5jb does not have a host assigned
I1017 13:42:17.988] has not:pod or type/name must be specified
I1017 13:42:18.067] pod "test-pod" deleted
I1017 13:42:18.156] replicaset.apps "frontend" deleted
I1017 13:42:18.246] configmap "test-set-env-config" deleted
I1017 13:42:18.268] +++ exit code: 0
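exec TYPE/NAME resolves the name to a pod through the resource's selector, so the three failure modes above differ: unknown type, missing deployment, and a kind (ConfigMap) with no selector to resolve. A sketch, with the exact argument forms assumed rather than taken from the script:
    kubectl exec foo/bar -- date       # error: the server doesn't have a resource type "foo"
    kubectl exec deploy/bar -- date    # Error from server (NotFound): deployments.apps "bar" not found
    kubectl exec rs/frontend -- date   # resolves to a frontend pod (BadRequest here, since none is scheduled)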
I1017 13:42:18.305] Recording: run_create_secret_tests
I1017 13:42:18.305] Running command: run_create_secret_tests
I1017 13:42:18.327] 
I1017 13:42:18.330] +++ Running case: test-cmd.run_create_secret_tests 
I1017 13:42:18.332] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:42:18.334] +++ command: run_create_secret_tests
I1017 13:42:18.427] Successful
I1017 13:42:18.427] message:Error from server (NotFound): secrets "mysecret" not found
I1017 13:42:18.427] has:secrets "mysecret" not found
I1017 13:42:18.576] Successful
I1017 13:42:18.576] message:Error from server (NotFound): secrets "mysecret" not found
I1017 13:42:18.576] has:secrets "mysecret" not found
I1017 13:42:18.578] Successful
I1017 13:42:18.578] message:user-specified
I1017 13:42:18.578] has:user-specified
I1017 13:42:18.648] Successful
I1017 13:42:18.722] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"85cb73ea-1d4e-42e2-b939-87b703a493ad","resourceVersion":"762","creationTimestamp":"2019-10-17T13:42:18Z"}}
... skipping 2 lines ...
I1017 13:42:18.897] has:uid
I1017 13:42:18.968] Successful
I1017 13:42:18.969] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"85cb73ea-1d4e-42e2-b939-87b703a493ad","resourceVersion":"763","creationTimestamp":"2019-10-17T13:42:18Z"},"data":{"key1":"config1"}}
I1017 13:42:18.969] has:config1
I1017 13:42:19.039] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"85cb73ea-1d4e-42e2-b939-87b703a493ad"}}
I1017 13:42:19.129] Successful
I1017 13:42:19.129] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I1017 13:42:19.129] has:configmaps "tester-update-cm" not found
I1017 13:42:19.140] +++ exit code: 0
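The secret and configmap cases pair a NotFound probe with a create-then-read round trip; the raw JSON bodies above suggest the tester-update-cm traffic goes through kubectl's --raw plumbing. A minimal secret-side reproduction, assuming the default namespace:
    kubectl get secret mysecret                    # Error from server (NotFound) until created
    kubectl create secret generic mysecret --from-literal=key1=user-specified
    kubectl get secret mysecret -o jsonpath='{.data.key1}' | base64 -d   # user-specified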
I1017 13:42:19.173] Recording: run_kubectl_create_kustomization_directory_tests
I1017 13:42:19.173] Running command: run_kubectl_create_kustomization_directory_tests
I1017 13:42:19.196] 
I1017 13:42:19.199] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I1017 13:42:21.817] valid-pod   0/1     Pending   0          0s
I1017 13:42:21.817] has:valid-pod
I1017 13:42:22.908] Successful
I1017 13:42:22.908] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 13:42:22.909] valid-pod   0/1     Pending   0          0s
I1017 13:42:22.909] STATUS      REASON          MESSAGE
I1017 13:42:22.909] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:42:22.909] has:Timeout exceeded while reading body
I1017 13:42:22.987] Successful
I1017 13:42:22.987] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 13:42:22.987] valid-pod   0/1     Pending   0          1s
I1017 13:42:22.987] has:valid-pod
I1017 13:42:23.055] Successful
I1017 13:42:23.056] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1017 13:42:23.056] has:Invalid timeout value
I1017 13:42:23.129] pod "valid-pod" deleted
I1017 13:42:23.146] +++ exit code: 0
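These watch checks lean on --request-timeout: a short timeout deliberately kills the watch stream (the InternalError lines above), and a malformed value is rejected client-side before any request is sent. Roughly, with a hypothetical bad unit:
    kubectl get pods --watch --request-timeout=1   # watch dies: Client.Timeout exceeded while reading body
    kubectl get pods --request-timeout=1M          # error: Invalid timeout value ...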
I1017 13:42:23.179] Recording: run_crd_tests
I1017 13:42:23.179] Running command: run_crd_tests
I1017 13:42:23.202] 
... skipping 158 lines ...
I1017 13:42:27.914] foo.company.com/test patched
I1017 13:42:27.999] crd.sh:236: Successful get foos/test {{.patched}}: value1
I1017 13:42:28.079] foo.company.com/test patched
I1017 13:42:28.168] crd.sh:238: Successful get foos/test {{.patched}}: value2
I1017 13:42:28.251] foo.company.com/test patched
I1017 13:42:28.349] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
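Strategic merge patch needs the compiled-in struct metadata that CRDs lack, so these checks drive the Foo custom resource with --type=merge; the recorded change-cause further down shows the exact flags used. For instance:
    kubectl patch foos/test --type=merge -p '{"patched":"value2"}'
    kubectl patch foos/test --type=merge -p '{"patched":null}' --record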
I1017 13:42:28.506] +++ [1017 13:42:28] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1017 13:42:28.573] {
I1017 13:42:28.573]     "apiVersion": "company.com/v1",
I1017 13:42:28.573]     "kind": "Foo",
I1017 13:42:28.574]     "metadata": {
I1017 13:42:28.574]         "annotations": {
I1017 13:42:28.574]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 194 lines ...
I1017 13:42:58.480] namespace/non-native-resources created
I1017 13:42:58.644] bar.company.com/test created
I1017 13:42:58.737] crd.sh:455: Successful get bars {{len .items}}: 1
I1017 13:42:58.813] namespace "non-native-resources" deleted
I1017 13:43:04.019] crd.sh:458: Successful get bars {{len .items}}: 0
I1017 13:43:04.185] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W1017 13:43:04.285] Error from server (NotFound): namespaces "non-native-resources" not found
I1017 13:43:04.386] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I1017 13:43:04.399] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1017 13:43:04.489] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I1017 13:43:04.520] +++ exit code: 0
I1017 13:43:04.551] Recording: run_cmd_with_img_tests
I1017 13:43:04.552] Running command: run_cmd_with_img_tests
... skipping 7 lines ...
I1017 13:43:04.734] +++ [1017 13:43:04] Testing cmd with image
I1017 13:43:04.822] Successful
I1017 13:43:04.823] message:deployment.apps/test1 created
I1017 13:43:04.823] has:deployment.apps/test1 created
I1017 13:43:04.899] deployment.apps "test1" deleted
I1017 13:43:04.978] Successful
I1017 13:43:04.979] message:error: Invalid image name "InvalidImageName": invalid reference format
I1017 13:43:04.979] has:error: Invalid image name "InvalidImageName": invalid reference format
I1017 13:43:04.991] +++ exit code: 0
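run_cmd_with_img_tests verifies that kubectl run rejects an image reference that fails OCI reference parsing before anything reaches the server. A sketch using the deprecated generator seen in the warning below:
    kubectl run test1 --generator=deployment/apps.v1 --image=k8s.gcr.io/nginx:test-cmd   # deployment.apps/test1 created
    kubectl run test1 --generator=deployment/apps.v1 --image=InvalidImageName            # error: Invalid image name ... invalid reference format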
I1017 13:43:05.025] +++ [1017 13:43:05] Testing recursive resources
I1017 13:43:05.030] +++ [1017 13:43:05] Creating namespace namespace-1571319785-5203
W1017 13:43:05.131] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:43:05.132] I1017 13:43:04.811924   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319784-14248", Name:"test1", UID:"7624043d-a926-4f53-9660-bfef54d2e813", APIVersion:"apps/v1", ResourceVersion:"923", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-6cdffdb5b8 to 1
W1017 13:43:05.132] I1017 13:43:04.817624   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319784-14248", Name:"test1-6cdffdb5b8", UID:"0a5a62bd-df71-432c-8779-9e569201e10c", APIVersion:"apps/v1", ResourceVersion:"924", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-fl6pf
W1017 13:43:05.193] W1017 13:43:05.192885   49514 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 13:43:05.195] E1017 13:43:05.194722   53053 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:05.296] namespace/namespace-1571319785-5203 created
I1017 13:43:05.296] Context "test" modified.
I1017 13:43:05.335] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:05.596] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:05.598] Successful
I1017 13:43:05.598] message:pod/busybox0 created
I1017 13:43:05.598] pod/busybox1 created
I1017 13:43:05.598] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 13:43:05.599] has:error validating data: kind not set
I1017 13:43:05.684] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:05.847] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1017 13:43:05.849] Successful
I1017 13:43:05.850] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:05.850] has:Object 'Kind' is missing
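All of the recursive tests feed kubectl a directory in which one manifest (busybox-broken.yaml) deliberately misspells kind as ind; with --recursive, kubectl still processes the valid files and reports the decode or validation failure for the broken one. Roughly:
    kubectl create -f hack/testdata/recursive/pod --recursive   # busybox0 and busybox1 created, broken file fails validation
    kubectl get -f hack/testdata/recursive/pod --recursive      # valid pods returned plus the Object 'Kind' is missing error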
I1017 13:43:05.936] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:06.236] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1017 13:43:06.238] Successful
I1017 13:43:06.239] message:pod/busybox0 replaced
I1017 13:43:06.239] pod/busybox1 replaced
I1017 13:43:06.240] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 13:43:06.240] has:error validating data: kind not set
I1017 13:43:06.343] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:06.442] Successful
I1017 13:43:06.443] message:Name:         busybox0
I1017 13:43:06.443] Namespace:    namespace-1571319785-5203
I1017 13:43:06.443] Priority:     0
I1017 13:43:06.444] Node:         <none>
... skipping 159 lines ...
I1017 13:43:06.458] has:Object 'Kind' is missing
I1017 13:43:06.537] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:06.718] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1017 13:43:06.720] Successful
I1017 13:43:06.720] message:pod/busybox0 annotated
I1017 13:43:06.721] pod/busybox1 annotated
I1017 13:43:06.721] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:06.721] has:Object 'Kind' is missing
I1017 13:43:06.808] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:07.057] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1017 13:43:07.059] Successful
I1017 13:43:07.060] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1017 13:43:07.060] pod/busybox0 configured
I1017 13:43:07.060] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1017 13:43:07.061] pod/busybox1 configured
I1017 13:43:07.061] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 13:43:07.061] has:error validating data: kind not set
I1017 13:43:07.152] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:07.335] deployment.apps/nginx created
W1017 13:43:07.436] W1017 13:43:05.303525   49514 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 13:43:07.436] E1017 13:43:05.304998   53053 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.437] W1017 13:43:05.404072   49514 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 13:43:07.437] E1017 13:43:05.405778   53053 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.437] W1017 13:43:05.498191   49514 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 13:43:07.438] E1017 13:43:05.499389   53053 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.438] E1017 13:43:06.196306   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.438] E1017 13:43:06.306535   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.439] E1017 13:43:06.407350   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.439] E1017 13:43:06.500779   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.439] E1017 13:43:07.197946   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.440] E1017 13:43:07.307948   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.440] I1017 13:43:07.339714   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319785-5203", Name:"nginx", UID:"149ecc15-370f-4ab0-82d9-812a98386c1f", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1017 13:43:07.441] I1017 13:43:07.345444   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319785-5203", Name:"nginx-f87d999f7", UID:"84b36041-1982-4f50-b9d8-b35a98976e76", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-scxcl
W1017 13:43:07.441] I1017 13:43:07.349382   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319785-5203", Name:"nginx-f87d999f7", UID:"84b36041-1982-4f50-b9d8-b35a98976e76", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-r6n27
W1017 13:43:07.441] I1017 13:43:07.349939   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319785-5203", Name:"nginx-f87d999f7", UID:"84b36041-1982-4f50-b9d8-b35a98976e76", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-fqd94
W1017 13:43:07.442] E1017 13:43:07.408610   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:07.502] E1017 13:43:07.502026   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:07.603] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 13:43:07.603] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:43:07.718] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I1017 13:43:07.721] Successful
I1017 13:43:07.721] message:apiVersion: extensions/v1beta1
I1017 13:43:07.721] kind: Deployment
... skipping 40 lines ...
I1017 13:43:07.795] deployment.apps "nginx" deleted
I1017 13:43:07.888] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:08.054] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:08.057] Successful
I1017 13:43:08.057] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1017 13:43:08.058] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1017 13:43:08.058] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:08.059] has:Object 'Kind' is missing
I1017 13:43:08.143] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:08.235] Successful
I1017 13:43:08.236] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:08.236] has:busybox0:busybox1:
I1017 13:43:08.238] Successful
I1017 13:43:08.239] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:08.239] has:Object 'Kind' is missing
I1017 13:43:08.334] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:08.419] pod/busybox0 labeled
I1017 13:43:08.419] pod/busybox1 labeled
I1017 13:43:08.420] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:08.507] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1017 13:43:08.509] Successful
I1017 13:43:08.510] message:pod/busybox0 labeled
I1017 13:43:08.510] pod/busybox1 labeled
I1017 13:43:08.510] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:08.510] has:Object 'Kind' is missing
I1017 13:43:08.594] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:08.679] pod/busybox0 patched
I1017 13:43:08.680] pod/busybox1 patched
I1017 13:43:08.681] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:08.772] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1017 13:43:08.774] Successful
I1017 13:43:08.775] message:pod/busybox0 patched
I1017 13:43:08.775] pod/busybox1 patched
I1017 13:43:08.775] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:08.776] has:Object 'Kind' is missing
I1017 13:43:08.862] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:09.043] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:09.045] Successful
I1017 13:43:09.046] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 13:43:09.046] pod "busybox0" force deleted
I1017 13:43:09.046] pod "busybox1" force deleted
I1017 13:43:09.047] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:43:09.047] has:Object 'Kind' is missing
I1017 13:43:09.132] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:09.298] replicationcontroller/busybox0 created
I1017 13:43:09.305] replicationcontroller/busybox1 created
W1017 13:43:09.405] kubectl convert is DEPRECATED and will be removed in a future version.
W1017 13:43:09.406] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1017 13:43:09.406] E1017 13:43:08.200021   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:09.407] E1017 13:43:08.309730   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:09.407] E1017 13:43:08.409475   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:09.408] E1017 13:43:08.503461   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:09.408] I1017 13:43:08.925358   53053 namespace_controller.go:185] Namespace has been deleted non-native-resources
W1017 13:43:09.409] E1017 13:43:09.201932   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:09.409] I1017 13:43:09.302405   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319785-5203", Name:"busybox0", UID:"60826c1a-b8cd-46f0-9cde-2ee06cd9d8ab", APIVersion:"v1", ResourceVersion:"980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7c2vp
W1017 13:43:09.410] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1017 13:43:09.411] I1017 13:43:09.310465   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319785-5203", Name:"busybox1", UID:"23a406b9-9b6d-49ab-820e-8d1427f91416", APIVersion:"v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jxs2f
W1017 13:43:09.411] E1017 13:43:09.311509   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:09.412] E1017 13:43:09.410709   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:09.505] E1017 13:43:09.505179   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:09.606] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:09.607] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:09.607] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 13:43:09.681] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 13:43:09.855] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1017 13:43:09.939] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1017 13:43:09.941] Successful
I1017 13:43:09.942] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1017 13:43:09.942] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1017 13:43:09.942] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:09.942] has:Object 'Kind' is missing
I1017 13:43:10.017] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1017 13:43:10.098] horizontalpodautoscaler.autoscaling "busybox1" deleted
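The hpa fields asserted above (1 2 80) map one-to-one onto the autoscale flags, applied per discovered object. A sketch against the same directory:
    kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80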
I1017 13:43:10.196] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:10.295] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 13:43:10.392] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 13:43:10.572] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1017 13:43:10.654] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1017 13:43:10.656] Successful
I1017 13:43:10.656] message:service/busybox0 exposed
I1017 13:43:10.657] service/busybox1 exposed
I1017 13:43:10.657] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:10.657] has:Object 'Kind' is missing
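Same pattern for expose: each valid replication controller gets a service on port 80 (the <no value> port name above just means no name was set), and the broken manifest again fails to decode. Roughly:
    kubectl expose -f hack/testdata/recursive/rc --recursive --port=80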
I1017 13:43:10.745] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:10.840] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 13:43:10.957] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 13:43:11.160] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I1017 13:43:11.260] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I1017 13:43:11.262] Successful
I1017 13:43:11.263] message:replicationcontroller/busybox0 scaled
I1017 13:43:11.263] replicationcontroller/busybox1 scaled
I1017 13:43:11.264] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:11.264] has:Object 'Kind' is missing
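And for scale, both controllers go from 1 replica to 2 while the broken file errors out. A sketch:
    kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2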
W1017 13:43:11.365] E1017 13:43:10.203415   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:11.365] E1017 13:43:10.313263   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:11.366] E1017 13:43:10.412067   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:11.366] E1017 13:43:10.506431   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:11.367] I1017 13:43:11.053813   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319785-5203", Name:"busybox0", UID:"60826c1a-b8cd-46f0-9cde-2ee06cd9d8ab", APIVersion:"v1", ResourceVersion:"1001", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-rcxnr
W1017 13:43:11.367] I1017 13:43:11.063481   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319785-5203", Name:"busybox1", UID:"23a406b9-9b6d-49ab-820e-8d1427f91416", APIVersion:"v1", ResourceVersion:"1006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-r7r9d
W1017 13:43:11.368] E1017 13:43:11.204844   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:11.368] E1017 13:43:11.315251   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:11.414] E1017 13:43:11.413497   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:11.508] E1017 13:43:11.507908   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:11.609] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:11.610] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:11.610] Successful
I1017 13:43:11.610] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 13:43:11.611] replicationcontroller "busybox0" force deleted
I1017 13:43:11.611] replicationcontroller "busybox1" force deleted
I1017 13:43:11.611] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:11.611] has:Object 'Kind' is missing
I1017 13:43:11.677] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:11.843] deployment.apps/nginx1-deployment created
I1017 13:43:11.847] deployment.apps/nginx0-deployment created
W1017 13:43:11.948] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1017 13:43:11.948] I1017 13:43:11.852496   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319785-5203", Name:"nginx0-deployment", UID:"19c4996d-fe87-4690-8902-64575943d83b", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W1017 13:43:11.949] I1017 13:43:11.854231   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319785-5203", Name:"nginx1-deployment", UID:"72b16896-4177-469d-8acb-b20315fd4701", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W1017 13:43:11.949] I1017 13:43:11.858124   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319785-5203", Name:"nginx0-deployment-57c6bff7f6", UID:"0957e7a3-5cf4-417d-8b00-59addf5f54a6", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-pmt27
W1017 13:43:11.949] I1017 13:43:11.858953   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319785-5203", Name:"nginx1-deployment-7bdbbfb5cf", UID:"c3b09d8d-b478-4465-acbc-c16e604ecc58", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-p8nks
W1017 13:43:11.950] I1017 13:43:11.867390   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319785-5203", Name:"nginx0-deployment-57c6bff7f6", UID:"0957e7a3-5cf4-417d-8b00-59addf5f54a6", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-q44p2
W1017 13:43:11.950] I1017 13:43:11.867760   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319785-5203", Name:"nginx1-deployment-7bdbbfb5cf", UID:"c3b09d8d-b478-4465-acbc-c16e604ecc58", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-kcgjs
I1017 13:43:12.051] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1017 13:43:12.059] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1017 13:43:12.271] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1017 13:43:12.274] Successful
I1017 13:43:12.275] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I1017 13:43:12.275] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I1017 13:43:12.275] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:43:12.276] has:Object 'Kind' is missing
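rollout undo with no earlier revision is a no-op ("skipped rollback"), reported per deployment when run recursively. Assuming rollout accepts the same -f/--recursive pair the other commands in this log use:
    kubectl rollout undo -f hack/testdata/recursive/deployment --recursive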
W1017 13:43:12.376] E1017 13:43:12.206270   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:12.377] E1017 13:43:12.317074   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:12.415] E1017 13:43:12.414897   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:12.510] E1017 13:43:12.509401   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:12.610] deployment.apps/nginx1-deployment paused
I1017 13:43:12.611] deployment.apps/nginx0-deployment paused
I1017 13:43:12.611] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1017 13:43:12.611] Successful
I1017 13:43:12.612] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:43:12.612] has:Object 'Kind' is missing
... skipping 9 lines ...
I1017 13:43:12.795] 1         <none>
I1017 13:43:12.795] 
I1017 13:43:12.796] deployment.apps/nginx0-deployment 
I1017 13:43:12.796] REVISION  CHANGE-CAUSE
I1017 13:43:12.796] 1         <none>
I1017 13:43:12.796] 
I1017 13:43:12.796] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:43:12.797] has:nginx0-deployment
I1017 13:43:12.797] Successful
I1017 13:43:12.797] message:deployment.apps/nginx1-deployment 
I1017 13:43:12.797] REVISION  CHANGE-CAUSE
I1017 13:43:12.797] 1         <none>
I1017 13:43:12.798] 
I1017 13:43:12.798] deployment.apps/nginx0-deployment 
I1017 13:43:12.798] REVISION  CHANGE-CAUSE
I1017 13:43:12.798] 1         <none>
I1017 13:43:12.798] 
I1017 13:43:12.799] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:43:12.799] has:nginx1-deployment
I1017 13:43:12.800] Successful
I1017 13:43:12.800] message:deployment.apps/nginx1-deployment 
I1017 13:43:12.800] REVISION  CHANGE-CAUSE
I1017 13:43:12.800] 1         <none>
I1017 13:43:12.801] 
I1017 13:43:12.801] deployment.apps/nginx0-deployment 
I1017 13:43:12.801] REVISION  CHANGE-CAUSE
I1017 13:43:12.801] 1         <none>
I1017 13:43:12.801] 
I1017 13:43:12.802] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:43:12.802] has:Object 'Kind' is missing
I1017 13:43:12.883] deployment.apps "nginx1-deployment" force deleted
I1017 13:43:12.887] deployment.apps "nginx0-deployment" force deleted
W1017 13:43:12.988] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:43:12.989] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W1017 13:43:13.208] E1017 13:43:13.207844   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:13.319] E1017 13:43:13.318429   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:13.418] E1017 13:43:13.417643   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:13.511] E1017 13:43:13.510830   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:13.992] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:14.141] replicationcontroller/busybox0 created
I1017 13:43:14.144] replicationcontroller/busybox1 created
I1017 13:43:14.251] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:43:14.354] Successful
I1017 13:43:14.354] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I1017 13:43:14.356] message:no rollbacker has been implemented for "ReplicationController"
I1017 13:43:14.356] no rollbacker has been implemented for "ReplicationController"
I1017 13:43:14.357] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:14.357] has:Object 'Kind' is missing
I1017 13:43:14.449] Successful
I1017 13:43:14.450] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:14.451] error: replicationcontrollers "busybox0" pausing is not supported
I1017 13:43:14.451] error: replicationcontrollers "busybox1" pausing is not supported
I1017 13:43:14.451] has:Object 'Kind' is missing
I1017 13:43:14.455] Successful
I1017 13:43:14.456] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:14.456] error: replicationcontrollers "busybox0" pausing is not supported
I1017 13:43:14.456] error: replicationcontrollers "busybox1" pausing is not supported
I1017 13:43:14.456] has:replicationcontrollers "busybox0" pausing is not supported
I1017 13:43:14.458] Successful
I1017 13:43:14.459] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:14.459] error: replicationcontrollers "busybox0" pausing is not supported
I1017 13:43:14.459] error: replicationcontrollers "busybox1" pausing is not supported
I1017 13:43:14.459] has:replicationcontrollers "busybox1" pausing is not supported
I1017 13:43:14.547] Successful
I1017 13:43:14.548] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:14.548] error: replicationcontrollers "busybox0" resuming is not supported
I1017 13:43:14.548] error: replicationcontrollers "busybox1" resuming is not supported
I1017 13:43:14.548] has:Object 'Kind' is missing
I1017 13:43:14.550] Successful
I1017 13:43:14.550] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:14.550] error: replicationcontrollers "busybox0" resuming is not supported
I1017 13:43:14.551] error: replicationcontrollers "busybox1" resuming is not supported
I1017 13:43:14.551] has:replicationcontrollers "busybox0" resuming is not supported
I1017 13:43:14.552] Successful
I1017 13:43:14.552] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:43:14.552] error: replicationcontrollers "busybox0" resuming is not supported
I1017 13:43:14.553] error: replicationcontrollers "busybox1" resuming is not supported
I1017 13:43:14.553] has:replicationcontrollers "busybox1" resuming is not supported
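Note on the repeated decode failure above: the fixture spells the "kind" key as "ind", so the third object cannot be decoded at all, while the pause/resume errors are kubectl's normal response to rollout pause/resume on replication controllers, which do not support it. A minimal reproduction of the decode error (file path illustrative, not the fixture used here):

    $ printf '{"apiVersion":"v1","ind":"ReplicationController"}' > /tmp/broken.json
    $ kubectl create -f /tmp/broken.json
    error: unable to decode "/tmp/broken.json": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController"}'

The --validate=false flag suggested by the validation warning further down only skips schema validation; it cannot recover a missing Kind, which is required just to decode the object.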
I1017 13:43:14.632] replicationcontroller "busybox0" force deleted
I1017 13:43:14.637] replicationcontroller "busybox1" force deleted
W1017 13:43:14.738] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1017 13:43:14.738] I1017 13:43:14.144832   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319785-5203", Name:"busybox0", UID:"a3c8c5b5-c370-4acb-b682-eea30572b9dd", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-77hdq
W1017 13:43:14.738] I1017 13:43:14.148411   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319785-5203", Name:"busybox1", UID:"b74c445e-c398-490d-a722-dfd51c0c5d66", APIVersion:"v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-kwjgq
W1017 13:43:14.739] E1017 13:43:14.209380   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:14.739] E1017 13:43:14.319828   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:14.739] E1017 13:43:14.419894   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:14.739] E1017 13:43:14.512020   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:14.739] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:43:14.740] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W1017 13:43:15.211] E1017 13:43:15.210798   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:15.321] E1017 13:43:15.321275   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:15.421] E1017 13:43:15.421407   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:15.513] E1017 13:43:15.513281   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:15.648] Recording: run_namespace_tests
I1017 13:43:15.648] Running command: run_namespace_tests
I1017 13:43:15.670] 
I1017 13:43:15.673] +++ Running case: test-cmd.run_namespace_tests 
I1017 13:43:15.675] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:43:15.678] +++ command: run_namespace_tests
I1017 13:43:15.686] +++ [1017 13:43:15] Testing kubectl(v1:namespaces)
I1017 13:43:15.759] namespace/my-namespace created
I1017 13:43:15.855] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1017 13:43:15.929] namespace "my-namespace" deleted
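The core.sh assertions wrap plain kubectl calls; the create/get/delete sequence above corresponds roughly to the following (exact harness flags are an assumption):

    $ kubectl create namespace my-namespace
    namespace/my-namespace created
    $ kubectl get namespaces/my-namespace -o go-template='{{.metadata.name}}'
    my-namespace
    $ kubectl delete namespace my-namespace
    namespace "my-namespace" deleted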
W1017 13:43:16.213] E1017 13:43:16.212375   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 19 lines (the same reflector.go:153 error repeating roughly every 100ms) ...
I1017 13:43:21.032] namespace/my-namespace condition met
I1017 13:43:21.128] Successful
I1017 13:43:21.128] message:Error from server (NotFound): namespaces "my-namespace" not found
I1017 13:43:21.128] has: not found
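"condition met" is the kubectl wait success message, so the deletion check above was most likely driven by something like:

    $ kubectl wait --for=delete namespace/my-namespace --timeout=60s
    namespace/my-namespace condition met
    $ kubectl get namespaces my-namespace
    Error from server (NotFound): namespaces "my-namespace" not found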
I1017 13:43:21.218] namespace/my-namespace created
W1017 13:43:21.319] E1017 13:43:21.220298   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:21.332] E1017 13:43:21.331898   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:21.431] E1017 13:43:21.430499   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:21.523] E1017 13:43:21.522388   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:21.623] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1017 13:43:21.624] Successful
I1017 13:43:21.624] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1017 13:43:21.624] namespace "kube-node-lease" deleted
I1017 13:43:21.624] namespace "my-namespace" deleted
I1017 13:43:21.624] namespace "namespace-1571319650-17502" deleted
... skipping 27 lines ...
I1017 13:43:21.628] namespace "namespace-1571319740-661" deleted
I1017 13:43:21.628] namespace "namespace-1571319741-16443" deleted
I1017 13:43:21.628] namespace "namespace-1571319743-23665" deleted
I1017 13:43:21.628] namespace "namespace-1571319744-6904" deleted
I1017 13:43:21.628] namespace "namespace-1571319784-14248" deleted
I1017 13:43:21.628] namespace "namespace-1571319785-5203" deleted
I1017 13:43:21.628] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1017 13:43:21.628] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1017 13:43:21.628] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1017 13:43:21.629] has:warning: deleting cluster-scoped resources
I1017 13:43:21.629] Successful
I1017 13:43:21.629] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1017 13:43:21.629] namespace "kube-node-lease" deleted
I1017 13:43:21.629] namespace "my-namespace" deleted
I1017 13:43:21.629] namespace "namespace-1571319650-17502" deleted
... skipping 27 lines ...
I1017 13:43:21.632] namespace "namespace-1571319740-661" deleted
I1017 13:43:21.632] namespace "namespace-1571319741-16443" deleted
I1017 13:43:21.632] namespace "namespace-1571319743-23665" deleted
I1017 13:43:21.632] namespace "namespace-1571319744-6904" deleted
I1017 13:43:21.632] namespace "namespace-1571319784-14248" deleted
I1017 13:43:21.632] namespace "namespace-1571319785-5203" deleted
I1017 13:43:21.632] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1017 13:43:21.632] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1017 13:43:21.632] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1017 13:43:21.632] has:namespace "my-namespace" deleted
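The three Forbidden errors are expected: the NamespaceLifecycle admission plugin refuses deletion of "default", "kube-public", and "kube-system", so a bulk delete along the lines of the sketch below removes every test namespace while always reporting those three as forbidden:

    $ kubectl delete namespaces --all
    namespace "my-namespace" deleted
    ...
    Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted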
I1017 13:43:21.651] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I1017 13:43:21.726] namespace/other created
I1017 13:43:21.829] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I1017 13:43:21.915] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:22.070] pod/valid-pod created
I1017 13:43:22.166] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:43:22.255] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:43:22.348] Successful
I1017 13:43:22.348] message:error: a resource cannot be retrieved by name across all namespaces
I1017 13:43:22.348] has:a resource cannot be retrieved by name across all namespaces
I1017 13:43:22.432] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:43:22.515] pod "valid-pod" force deleted
I1017 13:43:22.604] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:22.676] namespace "other" deleted
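The "cannot be retrieved by name across all namespaces" error above is kubectl's guard against combining a resource name with --all-namespaces; by-name lookups must target a single namespace:

    $ kubectl get pods valid-pod --all-namespaces
    error: a resource cannot be retrieved by name across all namespaces
    $ kubectl get pods valid-pod --namespace=other   # the per-namespace form works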
W1017 13:43:22.777] E1017 13:43:22.222232   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:22.777] E1017 13:43:22.333913   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:22.778] E1017 13:43:22.431616   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:22.778] I1017 13:43:22.479496   53053 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1017 13:43:22.778] I1017 13:43:22.479699   53053 shared_informer.go:204] Caches are synced for garbage collector 
W1017 13:43:22.778] I1017 13:43:22.489582   53053 shared_informer.go:197] Waiting for caches to sync for resource quota
W1017 13:43:22.778] I1017 13:43:22.490026   53053 shared_informer.go:204] Caches are synced for resource quota 
W1017 13:43:22.778] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:43:22.779] E1017 13:43:22.524430   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 8 lines (the same reflector.go:153 error repeating) ...
W1017 13:43:24.759] I1017 13:43:24.759071   53053 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1571319785-5203
W1017 13:43:24.763] I1017 13:43:24.762738   53053 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1571319785-5203
W1017 13:43:25.227] E1017 13:43:25.226963   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 11 lines (the same reflector.go:153 error repeating) ...
I1017 13:43:27.782] +++ exit code: 0
I1017 13:43:27.814] Recording: run_secrets_test
I1017 13:43:27.814] Running command: run_secrets_test
I1017 13:43:27.836] 
I1017 13:43:27.839] +++ Running case: test-cmd.run_secrets_test 
I1017 13:43:27.843] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 58 lines ...
I1017 13:43:29.738] secret "test-secret" deleted
I1017 13:43:29.814] secret/test-secret created
I1017 13:43:29.901] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1017 13:43:29.984] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1017 13:43:30.057] secret "test-secret" deleted
W1017 13:43:30.158] I1017 13:43:28.069274   69020 loader.go:375] Config loaded from file:  /tmp/tmp.aM3T9ox52p/.kube/config
W1017 13:43:30.159] E1017 13:43:28.231790   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 8 lines (the same reflector.go:153 error repeating) ...
I1017 13:43:30.336] secret/secret-string-data created
I1017 13:43:30.337] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1017 13:43:30.388] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1017 13:43:30.471] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I1017 13:43:30.542] secret "secret-string-data" deleted
I1017 13:43:30.629] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:30.783] secret "test-secret" deleted
I1017 13:43:30.864] namespace "test-secrets" deleted
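Two details in the secret checks above are worth unpacking: the kubernetes.io/tls type is what kubectl create secret tls produces, and the .data values are base64 (djE= decodes to "v1", djI= to "v2"); .stringData prints <no value> because it is write-only and the API server folds it into .data on create. A rough equivalent (certificate/key paths are placeholders):

    $ kubectl create secret tls test-secret --cert=tls.crt --key=tls.key --namespace=test-secrets
    $ kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'
    kubernetes.io/tls
    $ echo -n v1 | base64
    djE=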
W1017 13:43:30.965] E1017 13:43:30.346023   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:30.965] E1017 13:43:30.442541   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:30.965] E1017 13:43:30.535790   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:31.103] I1017 13:43:31.102752   53053 namespace_controller.go:185] Namespace has been deleted my-namespace
W1017 13:43:31.237] E1017 13:43:31.236737   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:31.348] E1017 13:43:31.347755   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:31.444] E1017 13:43:31.443844   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:31.537] E1017 13:43:31.537104   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:31.625] I1017 13:43:31.624730   53053 namespace_controller.go:185] Namespace has been deleted kube-node-lease
W1017 13:43:31.627] I1017 13:43:31.626572   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319665-25911
W1017 13:43:31.633] I1017 13:43:31.633301   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319653-7592
W1017 13:43:31.634] I1017 13:43:31.633301   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319669-25496
W1017 13:43:31.647] I1017 13:43:31.646669   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319669-20759
W1017 13:43:31.651] I1017 13:43:31.650501   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319656-25559
... skipping 20 lines ...
W1017 13:43:32.096] I1017 13:43:32.095845   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319703-6767
W1017 13:43:32.102] I1017 13:43:32.101747   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319740-2329
W1017 13:43:32.148] I1017 13:43:32.147610   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319736-943
W1017 13:43:32.203] I1017 13:43:32.202974   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319740-661
W1017 13:43:32.232] I1017 13:43:32.231833   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319741-16443
W1017 13:43:32.237] I1017 13:43:32.237060   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319744-6904
W1017 13:43:32.239] E1017 13:43:32.238764   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:32.241] I1017 13:43:32.240816   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319743-23665
W1017 13:43:32.246] I1017 13:43:32.246159   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319784-14248
W1017 13:43:32.323] I1017 13:43:32.323186   53053 namespace_controller.go:185] Namespace has been deleted namespace-1571319785-5203
W1017 13:43:32.349] E1017 13:43:32.349141   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:32.445] E1017 13:43:32.445332   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:32.539] E1017 13:43:32.538520   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:32.758] I1017 13:43:32.758033   53053 namespace_controller.go:185] Namespace has been deleted other
W1017 13:43:33.240] E1017 13:43:33.240258   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 11 lines (the same reflector.go:153 error repeating) ...
I1017 13:43:35.971] +++ exit code: 0
I1017 13:43:36.003] Recording: run_configmap_tests
I1017 13:43:36.003] Running command: run_configmap_tests
I1017 13:43:36.024] 
I1017 13:43:36.027] +++ Running case: test-cmd.run_configmap_tests 
I1017 13:43:36.030] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:43:36.032] +++ command: run_configmap_tests
I1017 13:43:36.043] +++ [1017 13:43:36] Creating namespace namespace-1571319816-2727
I1017 13:43:36.113] namespace/namespace-1571319816-2727 created
I1017 13:43:36.181] Context "test" modified.
I1017 13:43:36.187] +++ [1017 13:43:36] Testing configmaps
W1017 13:43:36.288] E1017 13:43:36.244492   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:36.355] E1017 13:43:36.355105   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:36.452] E1017 13:43:36.452018   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:36.545] E1017 13:43:36.544650   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:36.645] configmap/test-configmap created
I1017 13:43:36.646] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I1017 13:43:36.646] configmap "test-configmap" deleted
I1017 13:43:36.646] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I1017 13:43:36.713] namespace/test-configmaps created
I1017 13:43:36.798] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I1017 13:43:37.120] configmap/test-binary-configmap created
I1017 13:43:37.220] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I1017 13:43:37.316] (Bcore.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I1017 13:43:37.553] configmap "test-configmap" deleted
I1017 13:43:37.637] configmap "test-binary-configmap" deleted
I1017 13:43:37.721] namespace "test-configmaps" deleted
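The configmap steps reduce to ordinary create/get/delete calls; a sketch with illustrative keys (this log does not show the harness's actual --from-literal/--from-file arguments):

    $ kubectl create configmap test-configmap --from-literal=key1=value1 --namespace=test-configmaps
    configmap/test-configmap created
    $ kubectl create configmap test-binary-configmap --from-file=key=./some-binary-file --namespace=test-configmaps
    configmap/test-binary-configmap created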
W1017 13:43:37.821] E1017 13:43:37.246189   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 15 lines (the same reflector.go:153 error repeating) ...
W1017 13:43:40.949] I1017 13:43:40.949062   53053 namespace_controller.go:185] Namespace has been deleted test-secrets
W1017 13:43:41.253] E1017 13:43:41.252456   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 lines (the same reflector.go:153 error repeating) ...
I1017 13:43:42.826] +++ exit code: 0
I1017 13:43:42.856] Recording: run_client_config_tests
I1017 13:43:42.857] Running command: run_client_config_tests
I1017 13:43:42.876] 
I1017 13:43:42.879] +++ Running case: test-cmd.run_client_config_tests 
I1017 13:43:42.881] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:43:42.883] +++ command: run_client_config_tests
I1017 13:43:42.895] +++ [1017 13:43:42] Creating namespace namespace-1571319822-955
I1017 13:43:42.967] namespace/namespace-1571319822-955 created
I1017 13:43:43.035] Context "test" modified.
I1017 13:43:43.041] +++ [1017 13:43:43] Testing client config
I1017 13:43:43.106] Successful
I1017 13:43:43.107] message:error: stat missing: no such file or directory
I1017 13:43:43.107] has:missing: no such file or directory
I1017 13:43:43.181] Successful
I1017 13:43:43.181] message:error: stat missing: no such file or directory
I1017 13:43:43.181] has:missing: no such file or directory
I1017 13:43:43.259] Successful
I1017 13:43:43.259] message:error: stat missing: no such file or directory
I1017 13:43:43.259] has:missing: no such file or directory
I1017 13:43:43.335] Successful
I1017 13:43:43.335] message:Error in configuration: context was not found for specified context: missing-context
I1017 13:43:43.335] has:context was not found for specified context: missing-context
I1017 13:43:43.413] Successful
I1017 13:43:43.413] message:error: no server found for cluster "missing-cluster"
I1017 13:43:43.414] has:no server found for cluster "missing-cluster"
I1017 13:43:43.484] Successful
I1017 13:43:43.485] message:error: auth info "missing-user" does not exist
I1017 13:43:43.485] has:auth info "missing-user" does not exist
W1017 13:43:43.586] E1017 13:43:43.255207   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:43.586] E1017 13:43:43.364086   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:43.587] E1017 13:43:43.462304   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:43.587] E1017 13:43:43.555004   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:43.688] Successful
I1017 13:43:43.688] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1017 13:43:43.689] has:error loading config file
I1017 13:43:43.691] Successful
I1017 13:43:43.691] message:error: stat missing-config: no such file or directory
I1017 13:43:43.691] has:no such file or directory
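Each of the checks above exercises one kubeconfig error path: a nonexistent file, context, cluster, and user. The messages match what kubectl emits for flags such as:

    $ kubectl get pods --kubeconfig=missing
    error: stat missing: no such file or directory
    $ kubectl get pods --context=missing-context
    Error in configuration: context was not found for specified context: missing-context
    $ kubectl get pods --cluster=missing-cluster
    error: no server found for cluster "missing-cluster"
    $ kubectl get pods --user=missing-user
    error: auth info "missing-user" does not exist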
I1017 13:43:43.704] +++ exit code: 0
I1017 13:43:43.736] Recording: run_service_accounts_tests
I1017 13:43:43.737] Running command: run_service_accounts_tests
I1017 13:43:43.758] 
I1017 13:43:43.761] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I1017 13:43:44.071] namespace/test-service-accounts created
I1017 13:43:44.161] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I1017 13:43:44.251] serviceaccount/test-service-account created
I1017 13:43:44.354] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I1017 13:43:44.429] serviceaccount "test-service-account" deleted
I1017 13:43:44.511] namespace "test-service-accounts" deleted
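The service-account round trip mirrors the namespace tests; a rough equivalent:

    $ kubectl create serviceaccount test-service-account --namespace=test-service-accounts
    serviceaccount/test-service-account created
    $ kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
    serviceaccount "test-service-account" deleted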
W1017 13:43:44.612] E1017 13:43:44.256542   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 15 lines (the same reflector.go:153 error repeating) ...
W1017 13:43:47.803] I1017 13:43:47.802755   53053 namespace_controller.go:185] Namespace has been deleted test-configmaps
W1017 13:43:48.263] E1017 13:43:48.262985   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 lines (the same reflector.go:153 error repeating) ...
I1017 13:43:49.664] +++ exit code: 0
I1017 13:43:49.664] Recording: run_job_tests
I1017 13:43:49.664] Running command: run_job_tests
I1017 13:43:49.686] 
I1017 13:43:49.688] +++ Running case: test-cmd.run_job_tests 
I1017 13:43:49.691] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I1017 13:43:50.438] Labels:                        run=pi
I1017 13:43:50.439] Annotations:                   <none>
I1017 13:43:50.439] Schedule:                      59 23 31 2 *
I1017 13:43:50.439] Concurrency Policy:            Allow
I1017 13:43:50.439] Suspend:                       False
I1017 13:43:50.440] Successful Job History Limit:  3
I1017 13:43:50.440] Failed Job History Limit:      1
I1017 13:43:50.440] Starting Deadline Seconds:     <unset>
I1017 13:43:50.441] Selector:                      <unset>
I1017 13:43:50.441] Parallelism:                   <unset>
I1017 13:43:50.441] Completions:                   <unset>
I1017 13:43:50.441] Pod Template:
I1017 13:43:50.441]   Labels:  run=pi
... skipping 32 lines ...
I1017 13:43:50.946]                 run=pi
I1017 13:43:50.946] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1017 13:43:50.946] Controlled By:  CronJob/pi
I1017 13:43:50.946] Parallelism:    1
I1017 13:43:50.946] Completions:    1
I1017 13:43:50.946] Start Time:     Thu, 17 Oct 2019 13:43:50 +0000
I1017 13:43:50.947] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1017 13:43:50.947] Pod Template:
I1017 13:43:50.947]   Labels:  controller-uid=1c16e09d-8667-4b58-b9d2-97e78ec90f9d
I1017 13:43:50.947]            job-name=test-job
I1017 13:43:50.947]            run=pi
I1017 13:43:50.947]   Containers:
I1017 13:43:50.947]    pi:
... skipping 16 lines ...
I1017 13:43:50.949]   ----    ------            ----  ----            -------
I1017 13:43:50.949]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-lpjcs
I1017 13:43:51.029] job.batch "test-job" deleted
I1017 13:43:51.115] cronjob.batch "pi" deleted
I1017 13:43:51.204] namespace "test-jobs" deleted
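The described Job was instantiated from the CronJob: "Controlled By: CronJob/pi" and the cronjob.kubernetes.io/instantiate: manual annotation are what kubectl create job --from=cronjob/<name> sets. Roughly (image and arguments are placeholders; note the schedule "59 23 31 2 *" from the log names February 31st and so can never fire, keeping the CronJob inert during the test):

    $ kubectl run pi --generator=cronjob/v1beta1 --schedule="59 23 31 2 *" --image=perl --namespace=test-jobs
    $ kubectl create job test-job --from=cronjob/pi --namespace=test-jobs
    $ kubectl describe job test-job --namespace=test-jobs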
W1017 13:43:51.305] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:43:51.306] E1017 13:43:50.266756   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:51.306] E1017 13:43:50.373793   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:51.306] E1017 13:43:50.472328   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:51.306] E1017 13:43:50.563631   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:51.307] I1017 13:43:50.691950   53053 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"1c16e09d-8667-4b58-b9d2-97e78ec90f9d", APIVersion:"batch/v1", ResourceVersion:"1393", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-lpjcs
W1017 13:43:51.307] E1017 13:43:51.268772   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 15 lines (the same reflector.go:153 error repeating) ...
W1017 13:43:54.608] I1017 13:43:54.608243   53053 namespace_controller.go:185] Namespace has been deleted test-service-accounts
W1017 13:43:55.276] E1017 13:43:55.275827   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:55.381] E1017 13:43:55.381341   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:55.479] E1017 13:43:55.478728   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:55.570] E1017 13:43:55.570325   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:43:56.277] E1017 13:43:56.277034   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:43:56.378] +++ exit code: 0
I1017 13:43:56.379] Recording: run_create_job_tests
I1017 13:43:56.379] Running command: run_create_job_tests
I1017 13:43:56.379] 
I1017 13:43:56.379] +++ Running case: test-cmd.run_create_job_tests 
I1017 13:43:56.380] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 27 lines ...
I1017 13:43:57.645] +++ [1017 13:43:57] Testing pod templates
I1017 13:43:57.733] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:57.876] podtemplate/nginx created
I1017 13:43:57.967] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 13:43:58.040] NAME    CONTAINERS   IMAGES   POD LABELS
I1017 13:43:58.040] nginx   nginx        nginx    name=nginx
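For reference, the table above is default kubectl get podtemplates output; the object itself comes from a manifest, since kubectl create has no podtemplate subcommand (filename below is illustrative):

    $ kubectl create -f nginx-podtemplate.yaml
    podtemplate/nginx created
    $ kubectl get podtemplates
    NAME    CONTAINERS   IMAGES   POD LABELS
    nginx   nginx        nginx    name=nginx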
W1017 13:43:58.141] E1017 13:43:56.382683   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 2 similar lines ...
W1017 13:43:58.142] I1017 13:43:56.598495   53053 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571319836-24896", Name:"test-job", UID:"390ab495-c279-4cbd-b653-15d9e851ee72", APIVersion:"batch/v1", ResourceVersion:"1411", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-rbr82
W1017 13:43:58.142] I1017 13:43:56.843366   53053 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571319836-24896", Name:"test-job-pi", UID:"f0de2f34-0fbd-415c-a7d4-00aa60221d57", APIVersion:"batch/v1", ResourceVersion:"1418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-pg5hk
W1017 13:43:58.143] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:43:58.143] I1017 13:43:57.164379   53053 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571319836-24896", Name:"my-pi", UID:"0776b160-a741-44f8-8ab6-d9a5a32dd3cd", APIVersion:"batch/v1", ResourceVersion:"1426", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-7gkp8
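The generator deprecation warning above recommends kubectl create over kubectl run for cron-style workloads. A hedged equivalent, with image, schedule, and command as illustrative placeholders rather than the suite's fixtures:

  $ kubectl create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(20)'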
W1017 13:43:58.143] E1017 13:43:57.278731   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 similar lines ...
W1017 13:43:58.144] I1017 13:43:57.873621   49514 controller.go:606] quota admission added evaluator for: podtemplates
I1017 13:43:58.244] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 13:43:58.304] podtemplate "nginx" deleted
I1017 13:43:58.397] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:43:58.411] +++ exit code: 0
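The pod template checks above follow a create/assert/delete pattern, asserting state through Go templates. A minimal sketch of the same pattern against a test cluster, assuming podtemplate.yaml is any valid PodTemplate manifest (the path is illustrative):

  $ kubectl create -f podtemplate.yaml                                                      # e.g. creates podtemplate/nginx
  $ kubectl get podtemplates -o go-template='{{range .items}}{{.metadata.name}}:{{end}}'    # prints: nginx:
  $ kubectl delete podtemplate nginx
  $ kubectl get podtemplates -o go-template='{{range .items}}{{.metadata.name}}:{{end}}'    # prints nothing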
I1017 13:43:58.444] Recording: run_service_tests
... skipping 5 lines ...
I1017 13:43:58.543] Context "test" modified.
I1017 13:43:58.549] +++ [1017 13:43:58] Testing kubectl(v1:services)
I1017 13:43:58.642] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:43:58.788] service/redis-master created
I1017 13:43:58.875] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 13:43:58.961] 
I1017 13:43:58.965] core.sh:864: FAIL!
I1017 13:43:58.965] Describe services redis-master
I1017 13:43:58.966]   Expected Match: Name:
I1017 13:43:58.966]   Not found in:
I1017 13:43:58.966] Name:              redis-master
I1017 13:43:58.966] Namespace:         default
I1017 13:43:58.966] Labels:            app=redis
... skipping 56 lines ...
I1017 13:43:59.235] TargetPort:        6379/TCP
I1017 13:43:59.235] Endpoints:         <none>
I1017 13:43:59.235] Session Affinity:  None
I1017 13:43:59.235] Events:            <none>
I1017 13:43:59.235] 
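The FAIL! blocks above come from describe checks that grep kubectl describe output for an expected field. The equivalent manual check, assuming the Service created earlier in this case:

  $ kubectl describe services redis-master | grep '^Name:'   # the harness asserts this match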
I1017 13:43:59.335] 
I1017 13:43:59.335] FAIL!
I1017 13:43:59.335] Describe services
I1017 13:43:59.335]   Expected Match: Name:
I1017 13:43:59.335]   Not found in:
I1017 13:43:59.335] Name:              kubernetes
I1017 13:43:59.336] Namespace:         default
I1017 13:43:59.336] Labels:            component=apiserver
... skipping 183 lines ...
I1017 13:44:00.251]   selector:
I1017 13:44:00.251]     role: padawan
I1017 13:44:00.251]   sessionAffinity: None
I1017 13:44:00.251]   type: ClusterIP
I1017 13:44:00.251] status:
I1017 13:44:00.251]   loadBalancer: {}
W1017 13:44:00.352] E1017 13:43:58.280423   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 8 similar lines ...
W1017 13:44:00.357] error: you must specify resources by --filename when --local is set.
W1017 13:44:00.357] Example resource specifications include:
W1017 13:44:00.357]    '-f rsrc.yaml'
W1017 13:44:00.357]    '--filename=rsrc.json'
W1017 13:44:00.391] E1017 13:44:00.391215   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 2 similar lines ...
I1017 13:44:00.678] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1017 13:44:00.678] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 13:44:00.679] (Bservice "redis-master" deleted
I1017 13:44:00.749] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:44:00.848] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:44:01.017] (Bservice/redis-master created
I1017 13:44:01.125] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 13:44:01.232] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 13:44:01.389] (Bservice/service-v1-test created
I1017 13:44:01.490] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1017 13:44:01.651] service/service-v1-test replaced
I1017 13:44:01.754] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1017 13:44:01.858] service "redis-master" deleted
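The service tests above create a Service, replace it in place, and assert the resulting set of names via a Go template. A rough equivalent, treating redis-master.yaml and service-v1-test.yaml as stand-ins for the suite's fixtures:

  $ kubectl create -f redis-master.yaml        # illustrative manifest for service/redis-master
  $ kubectl create -f service-v1-test.yaml
  $ kubectl replace -f service-v1-test.yaml    # full replacement of the existing object by name
  $ kubectl get services -o go-template='{{range .items}}{{.metadata.name}}:{{end}}'
  $ kubectl delete services redis-master service-v1-test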
W1017 13:44:01.959] E1017 13:44:01.285072   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:01.959] I1017 13:44:01.294476   53053 namespace_controller.go:185] Namespace has been deleted test-jobs
W1017 13:44:01.959] E1017 13:44:01.392333   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 2 similar lines ...
I1017 13:44:02.060] service "service-v1-test" deleted
I1017 13:44:02.060] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:44:02.151] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:44:02.308] (Bservice/redis-master created
W1017 13:44:02.409] E1017 13:44:02.287144   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 similar lines ...
I1017 13:44:02.681] service/redis-slave created
I1017 13:44:02.681] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I1017 13:44:02.682] Successful
I1017 13:44:02.682] message:NAME           RSRC
I1017 13:44:02.682] kubernetes     145
I1017 13:44:02.682] redis-master   1461
... skipping 54 lines ...
I1017 13:44:05.875] +++ [1017 13:44:05] Creating namespace namespace-1571319845-1345
I1017 13:44:05.945] namespace/namespace-1571319845-1345 created
I1017 13:44:06.012] Context "test" modified.
I1017 13:44:06.018] +++ [1017 13:44:06] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I1017 13:44:06.102] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:06.253] daemonset.apps/bind created
W1017 13:44:06.353] E1017 13:44:03.288382   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 similar lines ...
W1017 13:44:06.354] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:44:06.355] I1017 13:44:03.600943   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"3aa6d04d-e575-4bfb-b0f6-26172d187990", APIVersion:"apps/v1", ResourceVersion:"1477", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W1017 13:44:06.355] I1017 13:44:03.606468   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"5d40ea21-d4b4-438e-8c8f-20351a036f43", APIVersion:"apps/v1", ResourceVersion:"1478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-vj8v9
W1017 13:44:06.355] I1017 13:44:03.609781   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"5d40ea21-d4b4-438e-8c8f-20351a036f43", APIVersion:"apps/v1", ResourceVersion:"1478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-tf5db
W1017 13:44:06.356] E1017 13:44:04.290056   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 similar lines ...
W1017 13:44:06.356] I1017 13:44:04.612780   49514 controller.go:606] quota admission added evaluator for: daemonsets.apps
W1017 13:44:06.357] I1017 13:44:04.622395   49514 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W1017 13:44:06.357] E1017 13:44:05.291569   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 similar lines ...
I1017 13:44:06.687] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1571319845-1345"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1017 13:44:06.687]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I1017 13:44:06.688] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I1017 13:44:06.688] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 13:44:06.688] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 13:44:06.784] daemonset.apps/bind configured
... skipping 18 lines ...
I1017 13:44:07.417] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:07.508] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1017 13:44:07.607] daemonset.apps/bind rolled back
I1017 13:44:07.698] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 13:44:07.787] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 13:44:07.889] Successful
I1017 13:44:07.890] message:error: unable to find specified revision 1000000 in history
I1017 13:44:07.890] has:unable to find specified revision
I1017 13:44:07.979] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 13:44:08.065] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 13:44:08.166] daemonset.apps/bind rolled back
I1017 13:44:08.270] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1017 13:44:08.359] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
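The rollback sequence above walks a DaemonSet through two recorded revisions and back, tracked via controllerrevisions. A condensed sketch using the fixture paths visible in the change-cause annotations (server flags omitted):

  $ kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml --record       # revision 1: pause:2.0, 1 container
  $ kubectl apply -f hack/testdata/rollingupdate-daemonset-rv2.yaml --record   # revision 2: pause:latest + nginx, 2 containers
  $ kubectl rollout undo daemonset bind                                        # back to revision 1
  $ kubectl rollout undo daemonset bind --to-revision=1000000                  # errors: unable to find specified revision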
... skipping 13 lines ...
I1017 13:44:08.848] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:08.992] replicationcontroller/frontend created
I1017 13:44:09.074] replicationcontroller "frontend" deleted
I1017 13:44:09.168] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:09.257] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:09.420] replicationcontroller/frontend created
W1017 13:44:09.521] E1017 13:44:07.294601   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 similar lines ...
W1017 13:44:09.529] E1017 13:44:07.618677   53053 daemon_controller.go:302] namespace-1571319845-1345/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1571319845-1345", SelfLink:"/apis/apps/v1/namespaces/namespace-1571319845-1345/daemonsets/bind", UID:"eb716880-144c-4bcf-a75c-a902fa63563a", ResourceVersion:"1542", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63706916646, loc:(*time.Location)(0x7763040)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1571319845-1345\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000a3c860), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c4ccd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002331ce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc000a3c880), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00182b398)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c4cd2c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1017 13:44:09.530] E1017 13:44:08.296073   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 similar lines ...
W1017 13:44:09.532] I1017 13:44:08.996899   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"62197af3-d0c2-42c8-b649-2373de678f54", APIVersion:"v1", ResourceVersion:"1555", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rxbgv
W1017 13:44:09.532] I1017 13:44:08.998753   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"62197af3-d0c2-42c8-b649-2373de678f54", APIVersion:"v1", ResourceVersion:"1555", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pntwz
W1017 13:44:09.533] I1017 13:44:08.999611   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"62197af3-d0c2-42c8-b649-2373de678f54", APIVersion:"v1", ResourceVersion:"1555", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4jm4j
W1017 13:44:09.533] E1017 13:44:09.298082   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:09.534] E1017 13:44:09.403717   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:09.534] I1017 13:44:09.423756   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"315c736b-fccf-4956-847a-259e2f1288e2", APIVersion:"v1", ResourceVersion:"1571", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7d4kg
W1017 13:44:09.535] I1017 13:44:09.428801   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"315c736b-fccf-4956-847a-259e2f1288e2", APIVersion:"v1", ResourceVersion:"1571", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h9nm7
W1017 13:44:09.535] I1017 13:44:09.429198   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"315c736b-fccf-4956-847a-259e2f1288e2", APIVersion:"v1", ResourceVersion:"1571", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kbvd6
W1017 13:44:09.536] E1017 13:44:09.499336   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:09.590] E1017 13:44:09.589400   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:09.690] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 13:44:09.690] 
I1017 13:44:09.691] core.sh:1061: FAIL!
I1017 13:44:09.691] Describe rc frontend
I1017 13:44:09.691]   Expected Match: Name:
I1017 13:44:09.691]   Not found in:
I1017 13:44:09.691] Name:         frontend
I1017 13:44:09.691] Namespace:    namespace-1571319848-6617
I1017 13:44:09.691] Selector:     app=guestbook,tier=frontend
I1017 13:44:09.691] Labels:       app=guestbook
I1017 13:44:09.691]               tier=frontend
I1017 13:44:09.691] Annotations:  <none>
I1017 13:44:09.691] Replicas:     3 current / 3 desired
I1017 13:44:09.691] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:09.692] Pod Template:
I1017 13:44:09.692]   Labels:  app=guestbook
I1017 13:44:09.692]            tier=frontend
I1017 13:44:09.692]   Containers:
I1017 13:44:09.692]    php-redis:
I1017 13:44:09.692]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 20 lines ...
I1017 13:44:09.752] Namespace:    namespace-1571319848-6617
I1017 13:44:09.752] Selector:     app=guestbook,tier=frontend
I1017 13:44:09.752] Labels:       app=guestbook
I1017 13:44:09.752]               tier=frontend
I1017 13:44:09.752] Annotations:  <none>
I1017 13:44:09.752] Replicas:     3 current / 3 desired
I1017 13:44:09.752] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:09.753] Pod Template:
I1017 13:44:09.753]   Labels:  app=guestbook
I1017 13:44:09.753]            tier=frontend
I1017 13:44:09.753]   Containers:
I1017 13:44:09.753]    php-redis:
I1017 13:44:09.753]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1017 13:44:09.854] Namespace:    namespace-1571319848-6617
I1017 13:44:09.854] Selector:     app=guestbook,tier=frontend
I1017 13:44:09.854] Labels:       app=guestbook
I1017 13:44:09.854]               tier=frontend
I1017 13:44:09.854] Annotations:  <none>
I1017 13:44:09.854] Replicas:     3 current / 3 desired
I1017 13:44:09.854] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:09.855] Pod Template:
I1017 13:44:09.855]   Labels:  app=guestbook
I1017 13:44:09.855]            tier=frontend
I1017 13:44:09.855]   Containers:
I1017 13:44:09.855]    php-redis:
I1017 13:44:09.855]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1017 13:44:09.952] Namespace:    namespace-1571319848-6617
I1017 13:44:09.952] Selector:     app=guestbook,tier=frontend
I1017 13:44:09.952] Labels:       app=guestbook
I1017 13:44:09.952]               tier=frontend
I1017 13:44:09.952] Annotations:  <none>
I1017 13:44:09.952] Replicas:     3 current / 3 desired
I1017 13:44:09.952] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:09.952] Pod Template:
I1017 13:44:09.953]   Labels:  app=guestbook
I1017 13:44:09.953]            tier=frontend
I1017 13:44:09.953]   Containers:
I1017 13:44:09.953]    php-redis:
I1017 13:44:09.953]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1017 13:44:09.954]   ----    ------            ----  ----                    -------
I1017 13:44:09.954]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-7d4kg
I1017 13:44:09.954]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-h9nm7
I1017 13:44:09.954]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-kbvd6
I1017 13:44:09.954] 
I1017 13:44:10.049] 
I1017 13:44:10.050] FAIL!
I1017 13:44:10.050] Describe rc
I1017 13:44:10.050]   Expected Match: Name:
I1017 13:44:10.050]   Not found in:
I1017 13:44:10.050] Name:         frontend
I1017 13:44:10.050] Namespace:    namespace-1571319848-6617
I1017 13:44:10.050] Selector:     app=guestbook,tier=frontend
I1017 13:44:10.051] Labels:       app=guestbook
I1017 13:44:10.051]               tier=frontend
I1017 13:44:10.051] Annotations:  <none>
I1017 13:44:10.051] Replicas:     3 current / 3 desired
I1017 13:44:10.051] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:10.051] Pod Template:
I1017 13:44:10.051]   Labels:  app=guestbook
I1017 13:44:10.051]            tier=frontend
I1017 13:44:10.052]   Containers:
I1017 13:44:10.052]    php-redis:
I1017 13:44:10.052]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 20 lines ...
I1017 13:44:10.151] Namespace:    namespace-1571319848-6617
I1017 13:44:10.151] Selector:     app=guestbook,tier=frontend
I1017 13:44:10.151] Labels:       app=guestbook
I1017 13:44:10.151]               tier=frontend
I1017 13:44:10.151] Annotations:  <none>
I1017 13:44:10.151] Replicas:     3 current / 3 desired
I1017 13:44:10.151] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:10.151] Pod Template:
I1017 13:44:10.152]   Labels:  app=guestbook
I1017 13:44:10.152]            tier=frontend
I1017 13:44:10.152]   Containers:
I1017 13:44:10.152]    php-redis:
I1017 13:44:10.152]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1017 13:44:10.260] Namespace:    namespace-1571319848-6617
I1017 13:44:10.260] Selector:     app=guestbook,tier=frontend
I1017 13:44:10.260] Labels:       app=guestbook
I1017 13:44:10.260]               tier=frontend
I1017 13:44:10.260] Annotations:  <none>
I1017 13:44:10.260] Replicas:     3 current / 3 desired
I1017 13:44:10.260] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:10.260] Pod Template:
I1017 13:44:10.260]   Labels:  app=guestbook
I1017 13:44:10.261]            tier=frontend
I1017 13:44:10.261]   Containers:
I1017 13:44:10.261]    php-redis:
I1017 13:44:10.261]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1017 13:44:10.365] Namespace:    namespace-1571319848-6617
I1017 13:44:10.365] Selector:     app=guestbook,tier=frontend
I1017 13:44:10.365] Labels:       app=guestbook
I1017 13:44:10.365]               tier=frontend
I1017 13:44:10.365] Annotations:  <none>
I1017 13:44:10.365] Replicas:     3 current / 3 desired
I1017 13:44:10.365] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:10.366] Pod Template:
I1017 13:44:10.366]   Labels:  app=guestbook
I1017 13:44:10.366]            tier=frontend
I1017 13:44:10.366]   Containers:
I1017 13:44:10.366]    php-redis:
I1017 13:44:10.366]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 16 lines ...
I1017 13:44:10.539] replicationcontroller/frontend scaled
I1017 13:44:10.632] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I1017 13:44:10.719] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 2
I1017 13:44:10.903] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I1017 13:44:10.994] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I1017 13:44:11.076] replicationcontroller/frontend scaled
W1017 13:44:11.177] E1017 13:44:10.299524   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 2 similar lines ...
W1017 13:44:11.178] I1017 13:44:10.544451   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"315c736b-fccf-4956-847a-259e2f1288e2", APIVersion:"v1", ResourceVersion:"1582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7d4kg
W1017 13:44:11.179] E1017 13:44:10.590789   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:11.179] error: Expected replicas to be 3, was 2
W1017 13:44:11.180] I1017 13:44:11.078939   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"315c736b-fccf-4956-847a-259e2f1288e2", APIVersion:"v1", ResourceVersion:"1588", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x94xf
I1017 13:44:11.281] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I1017 13:44:11.291] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I1017 13:44:11.367] replicationcontroller/frontend scaled
I1017 13:44:11.465] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I1017 13:44:11.557] replicationcontroller "frontend" deleted
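The scale assertions above resize the frontend replication controller up and down; the "Expected replicas to be 3, was 2" error in the warnings is consistent with kubectl scale's --current-replicas precondition failing on purpose. A minimal sketch of the pattern (the same verb also covers deployments, as exercised later in this run):

  $ kubectl scale rc frontend --replicas=2
  $ kubectl scale rc frontend --current-replicas=3 --replicas=2   # errors unless the live count is exactly 3
  $ kubectl scale rc frontend --replicas=3
  $ kubectl scale deployment nginx-deployment --replicas=1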
W1017 13:44:11.658] E1017 13:44:11.300993   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:11.659] I1017 13:44:11.373048   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"315c736b-fccf-4956-847a-259e2f1288e2", APIVersion:"v1", ResourceVersion:"1593", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-x94xf
W1017 13:44:11.659] E1017 13:44:11.406586   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 2 similar lines ...
W1017 13:44:11.736] I1017 13:44:11.735506   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-master", UID:"a2b88314-de26-4569-91c2-8833f88cdd93", APIVersion:"v1", ResourceVersion:"1605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-8t749
I1017 13:44:11.836] replicationcontroller/redis-master created
I1017 13:44:11.917] replicationcontroller/redis-slave created
I1017 13:44:12.010] replicationcontroller/redis-master scaled
I1017 13:44:12.013] replicationcontroller/redis-slave scaled
I1017 13:44:12.105] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
... skipping 4 lines ...
W1017 13:44:12.382] I1017 13:44:11.924053   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-slave", UID:"1250b167-eb5f-468f-b900-4b5c9ba8e6a0", APIVersion:"v1", ResourceVersion:"1610", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-dt2lh
W1017 13:44:12.383] I1017 13:44:12.013784   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-master", UID:"a2b88314-de26-4569-91c2-8833f88cdd93", APIVersion:"v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-2kgbn
W1017 13:44:12.384] I1017 13:44:12.015975   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-master", UID:"a2b88314-de26-4569-91c2-8833f88cdd93", APIVersion:"v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-s8g46
W1017 13:44:12.384] I1017 13:44:12.016379   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-master", UID:"a2b88314-de26-4569-91c2-8833f88cdd93", APIVersion:"v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-sjjwt
W1017 13:44:12.385] I1017 13:44:12.018659   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-slave", UID:"1250b167-eb5f-468f-b900-4b5c9ba8e6a0", APIVersion:"v1", ResourceVersion:"1618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-p8825
W1017 13:44:12.385] I1017 13:44:12.021086   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-slave", UID:"1250b167-eb5f-468f-b900-4b5c9ba8e6a0", APIVersion:"v1", ResourceVersion:"1618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-bptgc
W1017 13:44:12.386] E1017 13:44:12.302167   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:12.408] E1017 13:44:12.408021   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:12.446] I1017 13:44:12.445329   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment", UID:"6dd4d94f-686f-4172-85f2-a7daba2158f8", APIVersion:"apps/v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 13:44:12.448] I1017 13:44:12.447654   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-6986c7bc94", UID:"1aa72727-fe3a-408d-be9c-2e18e8d50f82", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-t8hvx
W1017 13:44:12.451] I1017 13:44:12.450312   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-6986c7bc94", UID:"1aa72727-fe3a-408d-be9c-2e18e8d50f82", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-6qdhw
W1017 13:44:12.451] I1017 13:44:12.450666   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-6986c7bc94", UID:"1aa72727-fe3a-408d-be9c-2e18e8d50f82", APIVersion:"apps/v1", ResourceVersion:"1652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-66x9t
W1017 13:44:12.505] E1017 13:44:12.504824   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:12.548] I1017 13:44:12.548096   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment", UID:"6dd4d94f-686f-4172-85f2-a7daba2158f8", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W1017 13:44:12.557] I1017 13:44:12.556391   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-6986c7bc94", UID:"1aa72727-fe3a-408d-be9c-2e18e8d50f82", APIVersion:"apps/v1", ResourceVersion:"1666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-t8hvx
W1017 13:44:12.558] I1017 13:44:12.557867   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-6986c7bc94", UID:"1aa72727-fe3a-408d-be9c-2e18e8d50f82", APIVersion:"apps/v1", ResourceVersion:"1666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-66x9t
W1017 13:44:12.594] E1017 13:44:12.593730   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:12.694] deployment.apps/nginx-deployment created
I1017 13:44:12.695] deployment.apps/nginx-deployment scaled
I1017 13:44:12.695] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I1017 13:44:12.734] deployment.apps "nginx-deployment" deleted
I1017 13:44:12.834] Successful
I1017 13:44:12.835] message:service/expose-test-deployment exposed
I1017 13:44:12.835] has:service/expose-test-deployment exposed
I1017 13:44:12.911] service "expose-test-deployment" deleted
I1017 13:44:12.996] Successful
I1017 13:44:12.997] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1017 13:44:12.997] See 'kubectl expose -h' for help and examples
I1017 13:44:12.997] has:invalid deployment: no selectors
I1017 13:44:13.142] deployment.apps/nginx-deployment created
I1017 13:44:13.240] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I1017 13:44:13.342] service/nginx-deployment exposed
I1017 13:44:13.435] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I1017 13:44:13.511] deployment.apps "nginx-deployment" deleted
I1017 13:44:13.522] service "nginx-deployment" deleted
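The expose tests above derive a Service from a Deployment and assert the resulting port. A rough sketch, assuming nginx-deployment is the 3-replica fixture created just before:

  $ kubectl expose deployment nginx-deployment --port=80                                   # creates service/nginx-deployment
  $ kubectl get service nginx-deployment -o go-template='{{(index .spec.ports 0).port}}'   # 80
  $ kubectl delete deployment,service nginx-deployment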
W1017 13:44:13.623] I1017 13:44:13.145016   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment", UID:"1fe69fca-7bc9-4631-86db-2dbe7c8d3e9d", APIVersion:"apps/v1", ResourceVersion:"1689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 13:44:13.623] I1017 13:44:13.148058   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-6986c7bc94", UID:"12cf33c0-7852-470f-9034-9f4d8858adc8", APIVersion:"apps/v1", ResourceVersion:"1690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-rhbl2
W1017 13:44:13.623] I1017 13:44:13.150843   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-6986c7bc94", UID:"12cf33c0-7852-470f-9034-9f4d8858adc8", APIVersion:"apps/v1", ResourceVersion:"1690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-hlw66
W1017 13:44:13.624] I1017 13:44:13.150929   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-6986c7bc94", UID:"12cf33c0-7852-470f-9034-9f4d8858adc8", APIVersion:"apps/v1", ResourceVersion:"1690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-pj4zx
W1017 13:44:13.624] E1017 13:44:13.303680   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 similar lines ...
W1017 13:44:13.676] I1017 13:44:13.675687   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"cbe9bd22-e8b2-4d03-84bc-5a83aa3316d0", APIVersion:"v1", ResourceVersion:"1718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bbrr9
W1017 13:44:13.679] I1017 13:44:13.678454   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"cbe9bd22-e8b2-4d03-84bc-5a83aa3316d0", APIVersion:"v1", ResourceVersion:"1718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gc78p
W1017 13:44:13.680] I1017 13:44:13.679515   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"cbe9bd22-e8b2-4d03-84bc-5a83aa3316d0", APIVersion:"v1", ResourceVersion:"1718", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cm2h8
I1017 13:44:13.780] replicationcontroller/frontend created
I1017 13:44:13.781] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I1017 13:44:13.854] service/frontend exposed
... skipping 11 lines ...
I1017 13:44:14.965] service "frontend" deleted
I1017 13:44:14.973] service "frontend-2" deleted
I1017 13:44:14.981] service "frontend-3" deleted
I1017 13:44:14.988] service "frontend-4" deleted
I1017 13:44:14.996] service "frontend-5" deleted
I1017 13:44:15.085] Successful
I1017 13:44:15.085] message:error: cannot expose a Node
I1017 13:44:15.085] has:cannot expose
I1017 13:44:15.171] Successful
I1017 13:44:15.171] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1017 13:44:15.172] has:metadata.name: Invalid value
W1017 13:44:15.272] E1017 13:44:14.305197   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 4 similar lines ...
I1017 13:44:15.407] Successful
I1017 13:44:15.408] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I1017 13:44:15.408] has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I1017 13:44:15.408] service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
I1017 13:44:15.466] Successful
I1017 13:44:15.467] message:service/etcd-server exposed
... skipping 3 lines ...
I1017 13:44:15.720] service "etcd-server" deleted
I1017 13:44:15.813] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 13:44:15.887] replicationcontroller "frontend" deleted
I1017 13:44:15.979] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:16.062] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:16.205] replicationcontroller/frontend created
W1017 13:44:16.306] E1017 13:44:15.412312   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 2 similar lines ...
W1017 13:44:16.307] I1017 13:44:16.208485   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"dd4d4d5a-b559-4616-87b0-177debe11dc1", APIVersion:"v1", ResourceVersion:"1780", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r6g6v
W1017 13:44:16.307] I1017 13:44:16.211184   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"dd4d4d5a-b559-4616-87b0-177debe11dc1", APIVersion:"v1", ResourceVersion:"1780", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l7z4g
W1017 13:44:16.308] I1017 13:44:16.211785   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"dd4d4d5a-b559-4616-87b0-177debe11dc1", APIVersion:"v1", ResourceVersion:"1780", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nqxj7
W1017 13:44:16.310] E1017 13:44:16.309660   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:16.381] I1017 13:44:16.380627   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-slave", UID:"97156bc8-18fa-46b2-86e8-bc69f4e72e2b", APIVersion:"v1", ResourceVersion:"1789", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-7nrw7
W1017 13:44:16.384] I1017 13:44:16.383413   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"redis-slave", UID:"97156bc8-18fa-46b2-86e8-bc69f4e72e2b", APIVersion:"v1", ResourceVersion:"1789", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-lvn68
W1017 13:44:16.414] E1017 13:44:16.413755   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 2 similar lines ...
I1017 13:44:16.700] replicationcontroller/redis-slave created
I1017 13:44:16.700] core.sh:1228: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I1017 13:44:16.701] core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I1017 13:44:16.701] replicationcontroller "frontend" deleted
I1017 13:44:16.701] replicationcontroller "redis-slave" deleted
I1017 13:44:16.743] core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:16.824] core.sh:1240: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:16.962] replicationcontroller/frontend created
I1017 13:44:17.054] core.sh:1243: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 13:44:17.126] horizontalpodautoscaler.autoscaling/frontend autoscaled
W1017 13:44:17.227] I1017 13:44:16.965678   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"7da87e07-4db7-46c9-bcb8-f5c8bb82256a", APIVersion:"v1", ResourceVersion:"1808", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f82bl
W1017 13:44:17.228] I1017 13:44:16.967908   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"7da87e07-4db7-46c9-bcb8-f5c8bb82256a", APIVersion:"v1", ResourceVersion:"1808", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8lbcc
W1017 13:44:17.228] I1017 13:44:16.968135   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571319848-6617", Name:"frontend", UID:"7da87e07-4db7-46c9-bcb8-f5c8bb82256a", APIVersion:"v1", ResourceVersion:"1808", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-q4cz2
W1017 13:44:17.311] E1017 13:44:17.310923   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:17.412] core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I1017 13:44:17.412] horizontalpodautoscaler.autoscaling "frontend" deleted
I1017 13:44:17.412] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1017 13:44:17.483] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1017 13:44:17.556] horizontalpodautoscaler.autoscaling "frontend" deleted
W1017 13:44:17.657] E1017 13:44:17.415222   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:17.657] E1017 13:44:17.511548   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:17.657] E1017 13:44:17.601402   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:17.658] Error: required flag(s) "max" not set
W1017 13:44:17.658] 
W1017 13:44:17.658] 
W1017 13:44:17.658] Examples:
W1017 13:44:17.658]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1017 13:44:17.658]   kubectl autoscale deployment foo --min=2 --max=10
W1017 13:44:17.658]   
... skipping 54 lines ...
I1017 13:44:17.877]           limits:
I1017 13:44:17.877]             cpu: 300m
I1017 13:44:17.877]           requests:
I1017 13:44:17.877]             cpu: 300m
I1017 13:44:17.877]       terminationGracePeriodSeconds: 0
I1017 13:44:17.877] status: {}
W1017 13:44:17.977] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I1017 13:44:18.098] deployment.apps/nginx-deployment-resources created
I1017 13:44:18.201] core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1017 13:44:18.291] core.sh:1266: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:18.378] core.sh:1267: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 13:44:18.465] deployment.apps/nginx-deployment-resources resource requirements updated
I1017 13:44:18.557] core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 3 lines ...
I1017 13:44:19.005] core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I1017 13:44:19.086] deployment.apps/nginx-deployment-resources resource requirements updated
W1017 13:44:19.187] I1017 13:44:18.102083   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources", UID:"da2009ae-f612-4b34-9b01-eedc9480b63f", APIVersion:"apps/v1", ResourceVersion:"1829", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
W1017 13:44:19.188] I1017 13:44:18.105065   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources-67f8cfff5", UID:"e25550e3-dc8e-4afd-88b2-1ecc11609028", APIVersion:"apps/v1", ResourceVersion:"1830", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-w9gjv
W1017 13:44:19.188] I1017 13:44:18.108239   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources-67f8cfff5", UID:"e25550e3-dc8e-4afd-88b2-1ecc11609028", APIVersion:"apps/v1", ResourceVersion:"1830", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-zg4rr
W1017 13:44:19.189] I1017 13:44:18.108724   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources-67f8cfff5", UID:"e25550e3-dc8e-4afd-88b2-1ecc11609028", APIVersion:"apps/v1", ResourceVersion:"1830", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-vmp8q
W1017 13:44:19.189] E1017 13:44:18.312301   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:19.190] E1017 13:44:18.416588   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:19.190] I1017 13:44:18.468925   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources", UID:"da2009ae-f612-4b34-9b01-eedc9480b63f", APIVersion:"apps/v1", ResourceVersion:"1845", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
W1017 13:44:19.191] I1017 13:44:18.472101   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources-55c547f795", UID:"4aa3ffbc-68f2-4f76-9b9f-f2b9af0ba3c3", APIVersion:"apps/v1", ResourceVersion:"1846", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-r9hbf
W1017 13:44:19.191] E1017 13:44:18.513104   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:19.191] E1017 13:44:18.602542   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:19.192] error: unable to find container named redis
W1017 13:44:19.192] I1017 13:44:18.828911   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources", UID:"da2009ae-f612-4b34-9b01-eedc9480b63f", APIVersion:"apps/v1", ResourceVersion:"1855", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-55c547f795 to 0
W1017 13:44:19.193] I1017 13:44:18.836130   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources-55c547f795", UID:"4aa3ffbc-68f2-4f76-9b9f-f2b9af0ba3c3", APIVersion:"apps/v1", ResourceVersion:"1859", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-55c547f795-r9hbf
W1017 13:44:19.193] I1017 13:44:18.844320   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources", UID:"da2009ae-f612-4b34-9b01-eedc9480b63f", APIVersion:"apps/v1", ResourceVersion:"1857", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
W1017 13:44:19.193] I1017 13:44:18.847058   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources-6d86564b45", UID:"55a48e5b-3c43-43cb-a9e3-782926001f9e", APIVersion:"apps/v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-9sv62
W1017 13:44:19.194] I1017 13:44:19.095112   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources", UID:"da2009ae-f612-4b34-9b01-eedc9480b63f", APIVersion:"apps/v1", ResourceVersion:"1874", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
W1017 13:44:19.194] I1017 13:44:19.099720   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319848-6617", Name:"nginx-deployment-resources-67f8cfff5", UID:"e25550e3-dc8e-4afd-88b2-1ecc11609028", APIVersion:"apps/v1", ResourceVersion:"1878", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-w9gjv
... skipping 76 lines ...
I1017 13:44:19.471]     status: "True"
I1017 13:44:19.471]     type: Progressing
I1017 13:44:19.471]   observedGeneration: 4
I1017 13:44:19.471]   replicas: 4
I1017 13:44:19.471]   unavailableReplicas: 4
I1017 13:44:19.471]   updatedReplicas: 1
W1017 13:44:19.571] E1017 13:44:19.313956   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:19.572] E1017 13:44:19.417864   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:19.572] E1017 13:44:19.514737   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:19.572] error: you must specify resources by --filename when --local is set.
W1017 13:44:19.572] Example resource specifications include:
W1017 13:44:19.572]    '-f rsrc.yaml'
W1017 13:44:19.573]    '--filename=rsrc.json'
W1017 13:44:19.604] E1017 13:44:19.604014   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:19.705] core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1017 13:44:19.708] core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1017 13:44:19.793] core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
I1017 13:44:19.869] deployment.apps "nginx-deployment-resources" deleted
I1017 13:44:19.888] +++ exit code: 0
I1017 13:44:19.922] Recording: run_deployment_tests
... skipping 22 lines ...
I1017 13:44:20.804] has:10
I1017 13:44:20.899] Successful
I1017 13:44:20.899] message:apps/v1
I1017 13:44:20.899] has:apps/v1
W1017 13:44:21.000] I1017 13:44:20.193424   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"test-nginx-extensions", UID:"bcf8e973-de0e-4a96-9435-2baef9b64166", APIVersion:"apps/v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5559c76db7 to 1
W1017 13:44:21.000] I1017 13:44:20.198643   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"test-nginx-extensions-5559c76db7", UID:"0514efcb-4784-4974-8cac-c82cae46aa69", APIVersion:"apps/v1", ResourceVersion:"1913", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5559c76db7-r982c
W1017 13:44:21.001] E1017 13:44:20.315754   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:21.001] E1017 13:44:20.419104   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:21.001] E1017 13:44:20.516204   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:21.002] E1017 13:44:20.605415   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:21.002] I1017 13:44:20.628027   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"test-nginx-apps", UID:"bd99b6ad-c849-4fbc-86be-12e237d35529", APIVersion:"apps/v1", ResourceVersion:"1926", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-79b9bd9585 to 1
W1017 13:44:21.002] I1017 13:44:20.632261   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"test-nginx-apps-79b9bd9585", UID:"c6d8700a-3f7e-4e1e-8fb8-73376e623d58", APIVersion:"apps/v1", ResourceVersion:"1927", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-79b9bd9585-nptc5
I1017 13:44:21.103] 
I1017 13:44:21.103] FAIL!
I1017 13:44:21.103] Describe rs
I1017 13:44:21.103]   Expected Match: Name:
I1017 13:44:21.103]   Not found in:
I1017 13:44:21.103] Name:           test-nginx-apps-79b9bd9585
I1017 13:44:21.103] Namespace:      namespace-1571319859-328
I1017 13:44:21.103] Selector:       app=test-nginx-apps,pod-template-hash=79b9bd9585
I1017 13:44:21.104] Labels:         app=test-nginx-apps
I1017 13:44:21.104]                 pod-template-hash=79b9bd9585
I1017 13:44:21.104] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1017 13:44:21.104]                 deployment.kubernetes.io/max-replicas: 2
I1017 13:44:21.104]                 deployment.kubernetes.io/revision: 1
I1017 13:44:21.104] Controlled By:  Deployment/test-nginx-apps
I1017 13:44:21.104] Replicas:       1 current / 1 desired
I1017 13:44:21.105] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:21.105] Pod Template:
I1017 13:44:21.105]   Labels:  app=test-nginx-apps
I1017 13:44:21.105]            pod-template-hash=79b9bd9585
I1017 13:44:21.105]   Containers:
I1017 13:44:21.105]    nginx:
I1017 13:44:21.105]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 7 lines ...
I1017 13:44:21.106]   ----    ------            ----  ----                   -------
I1017 13:44:21.106]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: test-nginx-apps-79b9bd9585-nptc5
I1017 13:44:21.107] 206 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
I1017 13:44:21.107] 
I1017 13:44:21.107] FAIL!
I1017 13:44:21.107] Describe pods
I1017 13:44:21.107]   Expected Match: Name:
I1017 13:44:21.107]   Not found in:
I1017 13:44:21.107] Name:           test-nginx-apps-79b9bd9585-nptc5
I1017 13:44:21.108] Namespace:      namespace-1571319859-328
I1017 13:44:21.108] Priority:       0
... skipping 28 lines ...
I1017 13:44:21.649] apps.sh:224: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:21.818] deployment.apps/deployment-with-unixuserid created
I1017 13:44:21.922] apps.sh:228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
I1017 13:44:21.998] deployment.apps "deployment-with-unixuserid" deleted
I1017 13:44:22.084] apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:22.230] deployment.apps/nginx-deployment created
W1017 13:44:22.331] E1017 13:44:21.320598   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:22.332] I1017 13:44:21.373518   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-with-command", UID:"24658c66-e25e-47dd-968d-cf7ae7530b6f", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-757c6f58dd to 1
W1017 13:44:22.333] I1017 13:44:21.380359   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-with-command-757c6f58dd", UID:"b791cb48-4458-4ba2-a7ae-4b1112b47083", APIVersion:"apps/v1", ResourceVersion:"1941", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-757c6f58dd-gsz64
W1017 13:44:22.333] E1017 13:44:21.420755   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:22.334] E1017 13:44:21.517475   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:22.334] E1017 13:44:21.606820   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:22.335] I1017 13:44:21.820212   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"deployment-with-unixuserid", UID:"dbabc68e-5189-4404-a919-23b292100a79", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-8fcdfc94f to 1
W1017 13:44:22.335] I1017 13:44:21.825436   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"deployment-with-unixuserid-8fcdfc94f", UID:"bf17a1bd-7987-48d1-b12f-f2dd34d2c282", APIVersion:"apps/v1", ResourceVersion:"1956", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-8fcdfc94f-6k7bp
W1017 13:44:22.336] I1017 13:44:22.235378   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"ab68cbe7-bdf0-4b6e-8abf-eb8ac85a4eb7", APIVersion:"apps/v1", ResourceVersion:"1969", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 13:44:22.337] I1017 13:44:22.240941   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-6986c7bc94", UID:"6f8bc0b6-e251-449d-accf-d8d16bc1f085", APIVersion:"apps/v1", ResourceVersion:"1970", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-x6dtw
W1017 13:44:22.337] I1017 13:44:22.243711   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-6986c7bc94", UID:"6f8bc0b6-e251-449d-accf-d8d16bc1f085", APIVersion:"apps/v1", ResourceVersion:"1970", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-56gln
W1017 13:44:22.337] I1017 13:44:22.246958   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-6986c7bc94", UID:"6f8bc0b6-e251-449d-accf-d8d16bc1f085", APIVersion:"apps/v1", ResourceVersion:"1970", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-kjkn6
W1017 13:44:22.338] E1017 13:44:22.321937   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:22.423] E1017 13:44:22.422725   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:22.520] E1017 13:44:22.519487   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:22.608] E1017 13:44:22.608044   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:22.709] apps.sh:239: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3
I1017 13:44:22.709] deployment.apps "nginx-deployment" deleted
I1017 13:44:22.709] apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:22.709] apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:22.710] apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:22.761] deployment.apps/nginx-deployment created
... skipping 19 lines ...
I1017 13:44:24.824] apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 13:44:24.916]     Image:	k8s.gcr.io/nginx:test-cmd
I1017 13:44:25.001] apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 13:44:25.092] deployment.apps/nginx rolled back
W1017 13:44:25.193] I1017 13:44:22.764139   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"77065bb5-aad0-4f1d-9eb9-7d258aca1086", APIVersion:"apps/v1", ResourceVersion:"1991", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7f6fc565b9 to 1
W1017 13:44:25.194] I1017 13:44:22.767544   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-7f6fc565b9", UID:"d99fc8c0-ef44-43ff-aea9-6d91c4f0e31c", APIVersion:"apps/v1", ResourceVersion:"1992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7f6fc565b9-lbvhc
W1017 13:44:25.194] E1017 13:44:23.323307   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.194] E1017 13:44:23.424348   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.195] E1017 13:44:23.520833   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.195] I1017 13:44:23.546078   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"b092ebd9-9d4a-4e7c-be0d-f47b0fd95a78", APIVersion:"apps/v1", ResourceVersion:"2010", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 13:44:25.195] I1017 13:44:23.549659   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-6986c7bc94", UID:"ef53052f-0c97-48a7-bc72-13589eef3297", APIVersion:"apps/v1", ResourceVersion:"2011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-blkms
W1017 13:44:25.196] I1017 13:44:23.552854   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-6986c7bc94", UID:"ef53052f-0c97-48a7-bc72-13589eef3297", APIVersion:"apps/v1", ResourceVersion:"2011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-9phq5
W1017 13:44:25.196] I1017 13:44:23.553224   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-6986c7bc94", UID:"ef53052f-0c97-48a7-bc72-13589eef3297", APIVersion:"apps/v1", ResourceVersion:"2011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-98c9h
W1017 13:44:25.196] E1017 13:44:23.609307   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.197] I1017 13:44:24.201293   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx", UID:"954627fe-7cc6-4c56-a6c1-a252e73b8e34", APIVersion:"apps/v1", ResourceVersion:"2034", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1017 13:44:25.197] I1017 13:44:24.204641   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-f87d999f7", UID:"9feee20d-95de-4783-81d8-abcb9548ca8f", APIVersion:"apps/v1", ResourceVersion:"2035", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-nrf9f
W1017 13:44:25.197] I1017 13:44:24.207754   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-f87d999f7", UID:"9feee20d-95de-4783-81d8-abcb9548ca8f", APIVersion:"apps/v1", ResourceVersion:"2035", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-nl8tm
W1017 13:44:25.198] I1017 13:44:24.209458   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-f87d999f7", UID:"9feee20d-95de-4783-81d8-abcb9548ca8f", APIVersion:"apps/v1", ResourceVersion:"2035", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-rvhhp
W1017 13:44:25.198] E1017 13:44:24.324737   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.198] E1017 13:44:24.425810   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.199] E1017 13:44:24.522230   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.199] E1017 13:44:24.610638   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.199] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1017 13:44:25.200] I1017 13:44:24.733029   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx", UID:"954627fe-7cc6-4c56-a6c1-a252e73b8e34", APIVersion:"apps/v1", ResourceVersion:"2048", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-78487f9fd7 to 1
W1017 13:44:25.200] I1017 13:44:24.735956   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-78487f9fd7", UID:"db774af8-ff4a-494c-904d-d125c7aa86ae", APIVersion:"apps/v1", ResourceVersion:"2049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-nfr5r
W1017 13:44:25.326] E1017 13:44:25.326134   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.427] E1017 13:44:25.427047   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.524] E1017 13:44:25.523779   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:25.612] E1017 13:44:25.612054   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:26.186] apps.sh:297: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:26.379] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:26.474] deployment.apps/nginx rolled back
W1017 13:44:26.575] error: unable to find specified revision 1000000 in history
W1017 13:44:26.575] E1017 13:44:26.327481   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:26.576] E1017 13:44:26.428506   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:26.576] E1017 13:44:26.524968   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:26.614] E1017 13:44:26.613929   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:27.329] E1017 13:44:27.328965   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:27.430] E1017 13:44:27.430150   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:27.527] E1017 13:44:27.526476   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:27.615] E1017 13:44:27.615348   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:27.716] apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 13:44:27.716] deployment.apps/nginx paused
W1017 13:44:27.817] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W1017 13:44:27.853] error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
I1017 13:44:27.953] deployment.apps/nginx resumed
I1017 13:44:28.055] deployment.apps/nginx rolled back
I1017 13:44:28.255]     deployment.kubernetes.io/revision-history: 1,3
W1017 13:44:28.356] E1017 13:44:28.330051   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:28.432] E1017 13:44:28.431650   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:28.442] error: desired revision (3) is different from the running revision (5)
W1017 13:44:28.528] E1017 13:44:28.528042   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:28.538] I1017 13:44:28.537873   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx", UID:"954627fe-7cc6-4c56-a6c1-a252e73b8e34", APIVersion:"apps/v1", ResourceVersion:"2078", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-78487f9fd7 to 0
W1017 13:44:28.543] I1017 13:44:28.542251   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-78487f9fd7", UID:"db774af8-ff4a-494c-904d-d125c7aa86ae", APIVersion:"apps/v1", ResourceVersion:"2082", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-78487f9fd7-nfr5r
W1017 13:44:28.544] I1017 13:44:28.544134   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx", UID:"954627fe-7cc6-4c56-a6c1-a252e73b8e34", APIVersion:"apps/v1", ResourceVersion:"2080", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-795cc9cbf5 to 1
W1017 13:44:28.548] I1017 13:44:28.548005   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-795cc9cbf5", UID:"d36b918d-7c43-4c65-bee1-7da74f83b35e", APIVersion:"apps/v1", ResourceVersion:"2086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-795cc9cbf5-x9nj9
W1017 13:44:28.617] E1017 13:44:28.616838   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:28.718] deployment.apps/nginx restarted
W1017 13:44:29.332] E1017 13:44:29.331361   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:29.433] E1017 13:44:29.433094   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:29.530] E1017 13:44:29.529640   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:29.618] E1017 13:44:29.618007   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:29.719] Successful
I1017 13:44:29.719] message:apiVersion: apps/v1
I1017 13:44:29.719] kind: ReplicaSet
I1017 13:44:29.719] metadata:
I1017 13:44:29.720]   annotations:
I1017 13:44:29.720]     deployment.kubernetes.io/desired-replicas: "3"
... skipping 61 lines ...
W1017 13:44:30.381] I1017 13:44:29.873786   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx2-57b7865cd9", UID:"d8978dbc-87bc-432c-aefe-0487ee703361", APIVersion:"apps/v1", ResourceVersion:"2101", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-rqhnk
W1017 13:44:30.381] I1017 13:44:29.873834   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx2-57b7865cd9", UID:"d8978dbc-87bc-432c-aefe-0487ee703361", APIVersion:"apps/v1", ResourceVersion:"2101", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-wrhkb
W1017 13:44:30.382] I1017 13:44:30.284977   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"41ff8fdf-6e44-47e5-a702-966a19ccea21", APIVersion:"apps/v1", ResourceVersion:"2134", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
W1017 13:44:30.382] I1017 13:44:30.289758   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"3e1457d8-bd8f-4b01-b66c-e601d392969a", APIVersion:"apps/v1", ResourceVersion:"2135", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-xmdpp
W1017 13:44:30.383] I1017 13:44:30.292808   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"3e1457d8-bd8f-4b01-b66c-e601d392969a", APIVersion:"apps/v1", ResourceVersion:"2135", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-nc5h6
W1017 13:44:30.383] I1017 13:44:30.293279   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"3e1457d8-bd8f-4b01-b66c-e601d392969a", APIVersion:"apps/v1", ResourceVersion:"2135", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-j2fkx
W1017 13:44:30.383] E1017 13:44:30.332866   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:30.435] E1017 13:44:30.434470   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:30.531] E1017 13:44:30.530852   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:30.620] E1017 13:44:30.619951   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:30.649] I1017 13:44:30.648302   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"41ff8fdf-6e44-47e5-a702-966a19ccea21", APIVersion:"apps/v1", ResourceVersion:"2148", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
W1017 13:44:30.650] I1017 13:44:30.650192   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-59df9b5f5b", UID:"ae10f2be-971b-4214-a311-ce77b5b7c562", APIVersion:"apps/v1", ResourceVersion:"2149", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-82gz5
I1017 13:44:30.751] apps.sh:337: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I1017 13:44:30.751] apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:30.752] apps.sh:339: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 13:44:30.752] deployment.apps/nginx-deployment image updated
... skipping 5 lines ...
I1017 13:44:31.315] deployment.apps/nginx-deployment image updated
I1017 13:44:31.420] apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 13:44:31.517] apps.sh:353: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 13:44:31.699] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 13:44:31.802] apps.sh:357: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 13:44:31.896] deployment.apps/nginx-deployment image updated
W1017 13:44:31.997] error: unable to find container named "redis"
W1017 13:44:31.997] E1017 13:44:31.338749   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:31.998] E1017 13:44:31.437439   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:31.998] E1017 13:44:31.532493   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:31.998] E1017 13:44:31.621520   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:31.999] I1017 13:44:31.906799   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"41ff8fdf-6e44-47e5-a702-966a19ccea21", APIVersion:"apps/v1", ResourceVersion:"2167", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
W1017 13:44:31.999] I1017 13:44:31.912403   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"41ff8fdf-6e44-47e5-a702-966a19ccea21", APIVersion:"apps/v1", ResourceVersion:"2170", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7d758dbc54 to 1
W1017 13:44:32.000] I1017 13:44:31.913648   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"3e1457d8-bd8f-4b01-b66c-e601d392969a", APIVersion:"apps/v1", ResourceVersion:"2171", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-xmdpp
W1017 13:44:32.000] I1017 13:44:31.918199   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-7d758dbc54", UID:"b5fa1fcb-964e-42d7-84e6-f8c3c0d92bd6", APIVersion:"apps/v1", ResourceVersion:"2174", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7d758dbc54-sh7fg
I1017 13:44:32.101] apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:32.102] apps.sh:361: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:32.272] apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:32.362] apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:44:32.434] deployment.apps "nginx-deployment" deleted
I1017 13:44:32.527] apps.sh:371: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:32.667] deployment.apps/nginx-deployment created
W1017 13:44:32.768] I1017 13:44:32.126144   53053 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1571319848-6617
W1017 13:44:32.768] E1017 13:44:32.340612   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:32.769] E1017 13:44:32.439508   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:32.769] E1017 13:44:32.533989   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:32.770] E1017 13:44:32.623302   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:32.770] I1017 13:44:32.670517   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"df9d55d8-0fcb-4a01-93d2-8cd15616b901", APIVersion:"apps/v1", ResourceVersion:"2199", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
W1017 13:44:32.771] I1017 13:44:32.674130   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"fb39f87c-29b4-438c-b80d-d856a97a8ef9", APIVersion:"apps/v1", ResourceVersion:"2200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-hbvgc
W1017 13:44:32.771] I1017 13:44:32.676892   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"fb39f87c-29b4-438c-b80d-d856a97a8ef9", APIVersion:"apps/v1", ResourceVersion:"2200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-tpptw
W1017 13:44:32.772] I1017 13:44:32.677722   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"fb39f87c-29b4-438c-b80d-d856a97a8ef9", APIVersion:"apps/v1", ResourceVersion:"2200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-kv4rv
I1017 13:44:32.872] configmap/test-set-env-config created
I1017 13:44:32.980] secret/test-set-env-secret created
... skipping 3 lines ...
I1017 13:44:33.374] deployment.apps/nginx-deployment env updated
I1017 13:44:33.466] apps.sh:383: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
I1017 13:44:33.553] apps.sh:385: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
I1017 13:44:33.642] deployment.apps/nginx-deployment env updated
I1017 13:44:33.744] apps.sh:389: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2
I1017 13:44:33.829] deployment.apps/nginx-deployment env updated
W1017 13:44:33.930] E1017 13:44:33.342025   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:33.930] I1017 13:44:33.378215   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"df9d55d8-0fcb-4a01-93d2-8cd15616b901", APIVersion:"apps/v1", ResourceVersion:"2215", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6b9f7756b4 to 1
W1017 13:44:33.931] I1017 13:44:33.382194   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-6b9f7756b4", UID:"b383b82f-92fb-4c63-bbaa-789014048023", APIVersion:"apps/v1", ResourceVersion:"2216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6b9f7756b4-hgb2v
W1017 13:44:33.931] E1017 13:44:33.440754   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:33.931] E1017 13:44:33.535364   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:33.931] E1017 13:44:33.624578   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:33.932] I1017 13:44:33.651806   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"df9d55d8-0fcb-4a01-93d2-8cd15616b901", APIVersion:"apps/v1", ResourceVersion:"2226", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
W1017 13:44:33.932] I1017 13:44:33.657474   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"fb39f87c-29b4-438c-b80d-d856a97a8ef9", APIVersion:"apps/v1", ResourceVersion:"2230", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-hbvgc
W1017 13:44:33.933] I1017 13:44:33.657787   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"df9d55d8-0fcb-4a01-93d2-8cd15616b901", APIVersion:"apps/v1", ResourceVersion:"2229", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-754bf964c8 to 1
W1017 13:44:33.933] I1017 13:44:33.661506   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-754bf964c8", UID:"4e5e7a73-b639-4a02-a2b9-6f392ea51818", APIVersion:"apps/v1", ResourceVersion:"2234", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-754bf964c8-7cjf4
W1017 13:44:33.933] I1017 13:44:33.858532   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"df9d55d8-0fcb-4a01-93d2-8cd15616b901", APIVersion:"apps/v1", ResourceVersion:"2247", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 1
W1017 13:44:33.934] I1017 13:44:33.864796   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-598d4d68b4", UID:"fb39f87c-29b4-438c-b80d-d856a97a8ef9", APIVersion:"apps/v1", ResourceVersion:"2251", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-kv4rv
... skipping 7 lines ...
I1017 13:44:34.069] deployment.apps/nginx-deployment env updated
I1017 13:44:34.162] deployment.apps/nginx-deployment env updated
W1017 13:44:34.263] I1017 13:44:34.080583   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"df9d55d8-0fcb-4a01-93d2-8cd15616b901", APIVersion:"apps/v1", ResourceVersion:"2287", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5958f7687 to 0
W1017 13:44:34.263] I1017 13:44:34.104743   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-5958f7687", UID:"ef84f2e1-85aa-445f-b37c-295b4daf0c8e", APIVersion:"apps/v1", ResourceVersion:"2291", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5958f7687-z6pgz
W1017 13:44:34.264] I1017 13:44:34.229078   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319859-328", Name:"nginx-deployment", UID:"df9d55d8-0fcb-4a01-93d2-8cd15616b901", APIVersion:"apps/v1", ResourceVersion:"2289", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-98b7fd455 to 1
W1017 13:44:34.301] I1017 13:44:34.300806   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319859-328", Name:"nginx-deployment-98b7fd455", UID:"26cf6aa9-3d3e-4888-a92d-63e6b3095019", APIVersion:"apps/v1", ResourceVersion:"2298", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-98b7fd455-dk9kp
W1017 13:44:34.343] E1017 13:44:34.343240   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:34.442] E1017 13:44:34.441809   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:34.500] E1017 13:44:34.499394   53053 replica_set.go:450] Sync "namespace-1571319859-328/nginx-deployment-5958f7687" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5958f7687": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1571319859-328/nginx-deployment-5958f7687, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ef84f2e1-85aa-445f-b37c-295b4daf0c8e, UID in object meta: 
W1017 13:44:34.537] E1017 13:44:34.537125   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:34.599] E1017 13:44:34.598616   53053 replica_set.go:450] Sync "namespace-1571319859-328/nginx-deployment-98b7fd455" failed with replicasets.apps "nginx-deployment-98b7fd455" not found
W1017 13:44:34.626] E1017 13:44:34.625848   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:34.649] E1017 13:44:34.649029   53053 replica_set.go:450] Sync "namespace-1571319859-328/nginx-deployment-868b664cb5" failed with replicasets.apps "nginx-deployment-868b664cb5" not found
I1017 13:44:34.750] deployment.apps/nginx-deployment env updated
I1017 13:44:34.750] deployment.apps "nginx-deployment" deleted
I1017 13:44:34.750] configmap "test-set-env-config" deleted
I1017 13:44:34.750] secret "test-set-env-secret" deleted
I1017 13:44:34.750] +++ exit code: 0
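The set-env block above updates the Deployment's container environment (the two "env updated" lines) and then tears down the Deployment, ConfigMap, and Secret. A minimal sketch of equivalent kubectl commands, assuming only the resource names visible in the log (the exact flags apps.sh passes are not shown here):

    # Inject env vars from the ConfigMap and Secret seen in the log,
    # then clean up, mirroring the "deleted" lines above.
    kubectl set env deployment/nginx-deployment --from=configmap/test-set-env-config
    kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret
    kubectl delete deployment nginx-deployment
    kubectl delete configmap test-set-env-config
    kubectl delete secret test-set-env-secret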
I1017 13:44:34.750] Recording: run_rs_tests
... skipping 16 lines ...
I1017 13:44:35.517] apps.sh:527: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I1017 13:44:35.520] +++ [1017 13:44:35] Deleting rs
I1017 13:44:35.596] replicaset.apps "frontend-no-cascade" deleted
W1017 13:44:35.697] I1017 13:44:35.004365   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend", UID:"fa973a7b-e5c4-4209-98f3-6d8e4b8115a2", APIVersion:"apps/v1", ResourceVersion:"2324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sp6ct
W1017 13:44:35.698] I1017 13:44:35.007222   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend", UID:"fa973a7b-e5c4-4209-98f3-6d8e4b8115a2", APIVersion:"apps/v1", ResourceVersion:"2324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mpnbr
W1017 13:44:35.699] I1017 13:44:35.007288   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend", UID:"fa973a7b-e5c4-4209-98f3-6d8e4b8115a2", APIVersion:"apps/v1", ResourceVersion:"2324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jrm5c
W1017 13:44:35.699] E1017 13:44:35.098595   53053 replica_set.go:450] Sync "namespace-1571319874-9305/frontend" failed with replicasets.apps "frontend" not found
W1017 13:44:35.700] E1017 13:44:35.344844   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:35.700] I1017 13:44:35.419619   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend-no-cascade", UID:"bec321f5-4ef7-4603-a844-276964b90feb", APIVersion:"apps/v1", ResourceVersion:"2339", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-h7gdf
W1017 13:44:35.701] I1017 13:44:35.422381   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend-no-cascade", UID:"bec321f5-4ef7-4603-a844-276964b90feb", APIVersion:"apps/v1", ResourceVersion:"2339", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-pfr9j
W1017 13:44:35.702] I1017 13:44:35.423212   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend-no-cascade", UID:"bec321f5-4ef7-4603-a844-276964b90feb", APIVersion:"apps/v1", ResourceVersion:"2339", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-rfjxt
W1017 13:44:35.702] E1017 13:44:35.443425   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:35.703] E1017 13:44:35.538407   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:35.703] E1017 13:44:35.627284   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:35.703] E1017 13:44:35.698308   53053 replica_set.go:450] Sync "namespace-1571319874-9305/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
I1017 13:44:35.804] apps.sh:531: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:35.818] apps.sh:533: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I1017 13:44:35.904] pod "frontend-no-cascade-h7gdf" deleted
I1017 13:44:35.909] pod "frontend-no-cascade-pfr9j" deleted
I1017 13:44:35.914] pod "frontend-no-cascade-rfjxt" deleted
I1017 13:44:36.003] apps.sh:536: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
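The sequence above is a non-cascading delete: the ReplicaSet object is removed (apps.sh:531 finds no rs) while its pods survive (apps.sh:533 still lists three php-redis containers) until they are deleted one by one. A rough sketch of that flow, assuming the kubectl of this era (--cascade=false; pod names taken from the log):

    # Delete only the ReplicaSet object; orphan its pods.
    kubectl delete rs frontend-no-cascade --cascade=false
    # The label selector still finds the orphaned pods...
    kubectl get pods -l tier=frontend
    # ...so they are removed explicitly.
    kubectl delete pod frontend-no-cascade-h7gdf frontend-no-cascade-pfr9j frontend-no-cascade-rfjxt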
I1017 13:44:36.086] apps.sh:540: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:44:36.246] replicaset.apps/frontend created
I1017 13:44:36.350] apps.sh:544: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 13:44:36.458] apps.sh:546: FAIL!
I1017 13:44:36.458] Describe rs frontend
I1017 13:44:36.458]   Expected Match: Name:
I1017 13:44:36.458]   Not found in:
I1017 13:44:36.458] Name:         frontend
I1017 13:44:36.458] Namespace:    namespace-1571319874-9305
I1017 13:44:36.459] Selector:     app=guestbook,tier=frontend
I1017 13:44:36.459] Labels:       app=guestbook
I1017 13:44:36.459]               tier=frontend
I1017 13:44:36.459] Annotations:  <none>
I1017 13:44:36.459] Replicas:     3 current / 3 desired
I1017 13:44:36.459] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:36.460] Pod Template:
I1017 13:44:36.460]   Labels:  app=guestbook
I1017 13:44:36.460]            tier=frontend
I1017 13:44:36.460]   Containers:
I1017 13:44:36.460]    php-redis:
I1017 13:44:36.460]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 20 lines ...
I1017 13:44:36.566] Namespace:    namespace-1571319874-9305
I1017 13:44:36.566] Selector:     app=guestbook,tier=frontend
I1017 13:44:36.566] Labels:       app=guestbook
I1017 13:44:36.566]               tier=frontend
I1017 13:44:36.566] Annotations:  <none>
I1017 13:44:36.566] Replicas:     3 current / 3 desired
I1017 13:44:36.567] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:36.567] Pod Template:
I1017 13:44:36.567]   Labels:  app=guestbook
I1017 13:44:36.567]            tier=frontend
I1017 13:44:36.567]   Containers:
I1017 13:44:36.567]    php-redis:
I1017 13:44:36.567]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 13 lines ...
I1017 13:44:36.569]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-nw4v5
I1017 13:44:36.569]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-6vfrl
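The FAIL! at apps.sh:546 is a describe assertion: the harness captures kubectl describe output and requires each expected header (here "Name:") to appear in it. A hedged re-creation of that check in plain bash (illustrative only, not the harness's actual helper):

    # Describe the ReplicaSet and assert the "Name:" header is present.
    out="$(kubectl describe rs frontend --namespace=namespace-1571319874-9305)"
    if ! grep -q '^Name:' <<<"${out}"; then
      echo "FAIL! expected 'Name:' in describe output" >&2
    fi

Notably, the text under "Not found in:" begins with "Name: frontend", so the mismatch looks like a capture or comparison problem in the harness rather than missing describe output.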
W1017 13:44:36.669] I1017 13:44:36.252274   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend", UID:"413ca8e6-6ce0-4193-a18c-fae34932e5f8", APIVersion:"apps/v1", ResourceVersion:"2362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-lsrtv
W1017 13:44:36.670] I1017 13:44:36.257357   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend", UID:"413ca8e6-6ce0-4193-a18c-fae34932e5f8", APIVersion:"apps/v1", ResourceVersion:"2362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6vfrl
W1017 13:44:36.670] I1017 13:44:36.257755   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend", UID:"413ca8e6-6ce0-4193-a18c-fae34932e5f8", APIVersion:"apps/v1", ResourceVersion:"2362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nw4v5
W1017 13:44:36.670] E1017 13:44:36.346229   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:36.670] E1017 13:44:36.445473   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:36.671] E1017 13:44:36.539758   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:36.671] E1017 13:44:36.628656   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:44:36.771] apps.sh:550: Successful describe
I1017 13:44:36.772] Name:         frontend
I1017 13:44:36.772] Namespace:    namespace-1571319874-9305
I1017 13:44:36.772] Selector:     app=guestbook,tier=frontend
I1017 13:44:36.772] Labels:       app=guestbook
I1017 13:44:36.772]               tier=frontend
I1017 13:44:36.772] Annotations:  <none>
I1017 13:44:36.772] Replicas:     3 current / 3 desired
I1017 13:44:36.772] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:36.772] Pod Template:
I1017 13:44:36.772]   Labels:  app=guestbook
I1017 13:44:36.773]            tier=frontend
I1017 13:44:36.773]   Containers:
I1017 13:44:36.773]    php-redis:
I1017 13:44:36.773]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I1017 13:44:36.786] Namespace:    namespace-1571319874-9305
I1017 13:44:36.787] Selector:     app=guestbook,tier=frontend
I1017 13:44:36.787] Labels:       app=guestbook
I1017 13:44:36.787]               tier=frontend
I1017 13:44:36.787] Annotations:  <none>
I1017 13:44:36.787] Replicas:     3 current / 3 desired
I1017 13:44:36.787] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:36.787] Pod Template:
I1017 13:44:36.787]   Labels:  app=guestbook
I1017 13:44:36.787]            tier=frontend
I1017 13:44:36.787]   Containers:
I1017 13:44:36.787]    php-redis:
I1017 13:44:36.787]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1017 13:44:36.788]   ----    ------            ----  ----                   -------
I1017 13:44:36.789]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-lsrtv
I1017 13:44:36.789]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-nw4v5
I1017 13:44:36.789]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-6vfrl
I1017 13:44:36.887] 
I1017 13:44:36.887] FAIL!
I1017 13:44:36.887] Describe rs
I1017 13:44:36.888]   Expected Match: Name:
I1017 13:44:36.888]   Not found in:
I1017 13:44:36.888] Name:         frontend
I1017 13:44:36.888] Namespace:    namespace-1571319874-9305
I1017 13:44:36.888] Selector:     app=guestbook,tier=frontend
I1017 13:44:36.888] Labels:       app=guestbook
I1017 13:44:36.888]               tier=frontend
I1017 13:44:36.888] Annotations:  <none>
I1017 13:44:36.889] Replicas:     3 current / 3 desired
I1017 13:44:36.889] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:36.889] Pod Template:
I1017 13:44:36.889]   Labels:  app=guestbook
I1017 13:44:36.889]            tier=frontend
I1017 13:44:36.889]   Containers:
I1017 13:44:36.889]    php-redis:
I1017 13:44:36.889]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 20 lines ...
I1017 13:44:36.990] Namespace:    namespace-1571319874-9305
I1017 13:44:36.990] Selector:     app=guestbook,tier=frontend
I1017 13:44:36.990] Labels:       app=guestbook
I1017 13:44:36.990]               tier=frontend
I1017 13:44:36.990] Annotations:  <none>
I1017 13:44:36.990] Replicas:     3 current / 3 desired
I1017 13:44:36.990] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:36.990] Pod Template:
I1017 13:44:36.991]   Labels:  app=guestbook
I1017 13:44:36.991]            tier=frontend
I1017 13:44:36.991]   Containers:
I1017 13:44:36.991]    php-redis:
I1017 13:44:36.991]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I1017 13:44:37.085] Namespace:    namespace-1571319874-9305
I1017 13:44:37.085] Selector:     app=guestbook,tier=frontend
I1017 13:44:37.085] Labels:       app=guestbook
I1017 13:44:37.085]               tier=frontend
I1017 13:44:37.085] Annotations:  <none>
I1017 13:44:37.086] Replicas:     3 current / 3 desired
I1017 13:44:37.086] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:37.086] Pod Template:
I1017 13:44:37.086]   Labels:  app=guestbook
I1017 13:44:37.086]            tier=frontend
I1017 13:44:37.086]   Containers:
I1017 13:44:37.086]    php-redis:
I1017 13:44:37.086]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I1017 13:44:37.192] Namespace:    namespace-1571319874-9305
I1017 13:44:37.193] Selector:     app=guestbook,tier=frontend
I1017 13:44:37.193] Labels:       app=guestbook
I1017 13:44:37.193]               tier=frontend
I1017 13:44:37.193] Annotations:  <none>
I1017 13:44:37.193] Replicas:     3 current / 3 desired
I1017 13:44:37.193] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:44:37.193] Pod Template:
I1017 13:44:37.193]   Labels:  app=guestbook
I1017 13:44:37.194]            tier=frontend
I1017 13:44:37.194]   Containers:
I1017 13:44:37.194]    php-redis:
I1017 13:44:37.194]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
I1017 13:44:37.195]   Type    Reason            Age   From                   Message
I1017 13:44:37.195]   ----    ------            ----  ----                   -------
I1017 13:44:37.195]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-lsrtv
I1017 13:44:37.196]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-nw4v5
I1017 13:44:37.196]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-6vfrl
I1017 13:44:37.306] FAIL!
I1017 13:44:37.306] Describe pods
I1017 13:44:37.306]   Expected Match: Name:
I1017 13:44:37.306]   Not found in:
I1017 13:44:37.306] Name:           frontend-6vfrl
I1017 13:44:37.306] Namespace:      namespace-1571319874-9305
I1017 13:44:37.306] Priority:       0
... skipping 83 lines ...
I1017 13:44:37.313] 562 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
I1017 13:44:37.391] apps.sh:566: Successful get rs frontend {{.spec.replicas}}: 3
I1017 13:44:37.478] replicaset.apps/frontend scaled
I1017 13:44:37.571] apps.sh:570: Successful get rs frontend {{.spec.replicas}}: 2
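apps.sh:566 and apps.sh:570 bracket a scale-down of the frontend ReplicaSet from 3 to 2 replicas, reading .spec.replicas back through a go-template. A minimal sketch of that step (standard kubectl flags; namespace as in the log):

    # Read the replica count, scale down, and read it back.
    kubectl get rs frontend -o go-template='{{.spec.replicas}}'    # prints 3
    kubectl scale rs frontend --replicas=2
    kubectl get rs frontend -o go-template='{{.spec.replicas}}'    # prints 2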
I1017 13:44:37.720] deployment.apps/scale-1 created
W1017 13:44:37.821] E1017 13:44:37.347481   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:37.821] E1017 13:44:37.446703   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:37.822] I1017 13:44:37.482876   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"frontend", UID:"413ca8e6-6ce0-4193-a18c-fae34932e5f8", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-6vfrl
W1017 13:44:37.822] E1017 13:44:37.541330   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:37.822] E1017 13:44:37.630158   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:37.822] I1017 13:44:37.722952   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319874-9305", Name:"scale-1", UID:"0c07a605-8791-4780-8f1a-1a0a54785d75", APIVersion:"apps/v1", ResourceVersion:"2378", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 1
W1017 13:44:37.823] I1017 13:44:37.725933   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"scale-1-5c5565bcd9", UID:"1b1d597f-5177-4129-a409-c481e8f1ef7f", APIVersion:"apps/v1", ResourceVersion:"2379", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-zvksh
W1017 13:44:37.883] I1017 13:44:37.882881   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319874-9305", Name:"scale-2", UID:"27086bad-7d76-4e60-ace4-e79da6815240", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 1
W1017 13:44:37.887] I1017 13:44:37.886380   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"scale-2-5c5565bcd9", UID:"e1ac8070-bfab-4be0-82f9-31096d081fa3", APIVersion:"apps/v1", ResourceVersion:"2389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-rv2w7
I1017 13:44:37.987] deployment.apps/scale-2 created
I1017 13:44:38.034] deployment.apps/scale-3 created
... skipping 14 lines ...
I1017 13:44:39.111] replicaset.apps "frontend" deleted
I1017 13:44:39.199] deployment.apps "scale-1" deleted
I1017 13:44:39.206] deployment.apps "scale-2" deleted
I1017 13:44:39.225] deployment.apps "scale-3" deleted
W1017 13:44:39.326] I1017 13:44:38.037913   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319874-9305", Name:"scale-3", UID:"6d3410a4-57e2-4a13-80bf-41b398484137", APIVersion:"apps/v1", ResourceVersion:"2398", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 1
W1017 13:44:39.327] I1017 13:44:38.040432   53053 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571319874-9305", Name:"scale-3-5c5565bcd9", UID:"f8ace3bb-40c4-4522-a83c-9ad89473053b", APIVersion:"apps/v1", ResourceVersion:"2399", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-gf9f2
W1017 13:44:39.327] E1017 13:44:38.349194   53053 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:44:39.328] I1017 13:44:38.404220   53053 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571319874-9305", Name:"scale-1", UID:"0c07a605-8791-4780-8f1a-1a0a54785d75", APIVersion:"apps/v1"