PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-10-17 12:50
Elapsed: 33m31s
Builder: gke-prow-ssd-pool-1a225945-z2ft
Refs: master:534051ac, 82703:d79abe85
pod: 93a43cef-f0dc-11e9-a838-6e6be72da8b3
infra-commit: 1ed1b45c6
repo: k8s.io/kubernetes
repo-commit: 8948ef4f293a8edc2e149e536d7ebb2ec7232e30
repos: {u'k8s.io/kubernetes': u'master:534051acec00ab0dcaea502cc3bf410ba32c7b27,82703:d79abe85fde8ea2dc86c92071a8be8f76bfee771'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.17s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
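The trailing $ anchors the -run regex so only this test runs. As the name suggests, the test exercises creating a scheduler whose policy is read from a ConfigMap. A minimal sketch of the kind of object involved, assuming era-appropriate client-go (the namespace, name, and policy contents here are illustrative, not the test's actual fixtures):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the integration test instead talks to
	// an in-process apiserver it starts itself.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A scheduler Policy wrapped in a ConfigMap, roughly what the test
	// stores before constructing a scheduler that reads its policy from it.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: metav1.NamespaceSystem,
			Name:      "scheduler-custom-policy", // illustrative name
		},
		Data: map[string]string{
			// kube-scheduler Policy format (v1), abbreviated.
			"policy.cfg": `{"kind":"Policy","apiVersion":"v1","predicates":[{"name":"PodFitsResources"}],"priorities":[{"name":"LeastRequestedPriority","weight":1}]}`,
		},
	}
	if _, err := cs.CoreV1().ConfigMaps(metav1.NamespaceSystem).Create(cm); err != nil {
		panic(err)
	}
	fmt.Println("created", cm.Name)
}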
=== RUN   TestSchedulerCreationFromConfigMap
W1017 13:21:59.375679  108740 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1017 13:21:59.375788  108740 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1017 13:21:59.375844  108740 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1017 13:21:59.375934  108740 master.go:261] Using reconciler: 
I1017 13:21:59.378267  108740 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
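Each "storing ... in v1" line dumps the etcd storage configuration a registry store was built with; the large integers are nanosecond durations (CompactionInterval 300000000000 = 5m, CountMetricPollPeriod 60000000000 = 1m). A minimal sketch of an equivalent config, using only the field names visible in the log line above (the import path is the upstream storagebackend package; treat the exact struct shape as era-specific):

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Mirrors the logged values: a single local etcd endpoint, no TLS,
	// paging enabled, 5m compaction, 1m count-metric polling.
	cfg := storagebackend.Config{
		Prefix: "8f90a16b-662c-477c-865c-00c5f03d6575",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // logged as 300000000000 ns
		CountMetricPollPeriod: time.Minute,     // logged as 60000000000 ns
	}
	fmt.Printf("%+v\n", cfg)
}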
I1017 13:21:59.378606  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.378859  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.379858  108740 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1017 13:21:59.380002  108740 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.380354  108740 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1017 13:21:59.380804  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.380910  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
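The repeated client.go "parsed scheme: endpoint" / ccResolverWrapper pairs come from etcd's gRPC client: each resource store dials its own etcd client against the same endpoint list. A minimal sketch of the underlying dial, assuming the etcd clientv3 package of that era:

package main

import (
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// One such client is created per resource store; in this test they
	// all point at the single local etcd on 127.0.0.1:2379.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	fmt.Println("dialed", cli.Endpoints())
}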
I1017 13:21:59.381701  108740 store.go:1342] Monitoring events count at <storage-prefix>//events
I1017 13:21:59.381781  108740 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.382516  108740 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1017 13:21:59.393870  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.393909  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.393923  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.394943  108740 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1017 13:21:59.395005  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.395025  108740 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.395158  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.395188  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.395202  108740 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1017 13:21:59.396259  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
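The reflector.go "Listing and watching" and watch_cache.go "Replace watchCache" pairs are the apiserver's watch cache warming up: a reflector lists each resource from etcd, replaces the cache contents at the current revision (48828 here), then keeps watching for updates. A minimal reflector sketch using the client-go cache package (the ListWatch below targets pods through a clientset purely for illustration; the apiserver's cacher does the equivalent directly against etcd):

package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List+watch pods into a local store; the initial list ends in a
	// store Replace, which is what the log lines above record.
	lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &corev1.Pod{}, store, 0)

	stop := make(chan struct{})
	defer close(stop)
	go reflector.Run(stop)
	time.Sleep(2 * time.Second) // let the initial list/Replace happen
}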
I1017 13:21:59.396272  108740 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1017 13:21:59.396393  108740 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1017 13:21:59.396904  108740 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.397127  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.397158  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.397403  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.398623  108740 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1017 13:21:59.398985  108740 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.399222  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.399292  108740 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1017 13:21:59.399372  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.400102  108740 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1017 13:21:59.400166  108740 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1017 13:21:59.400275  108740 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.400476  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.400498  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.401128  108740 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1017 13:21:59.401215  108740 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1017 13:21:59.401356  108740 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.401543  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.401563  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.401567  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.402747  108740 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1017 13:21:59.402931  108740 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.403025  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.403156  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.403189  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.403261  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.403263  108740 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1017 13:21:59.404114  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.404164  108740 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1017 13:21:59.404235  108740 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1017 13:21:59.404318  108740 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.404467  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.404500  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.405775  108740 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1017 13:21:59.405909  108740 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1017 13:21:59.405920  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.406309  108740 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.406491  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.406508  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.406650  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.407595  108740 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1017 13:21:59.407666  108740 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1017 13:21:59.407850  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.407984  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.408005  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.408776  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.408874  108740 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1017 13:21:59.408966  108740 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1017 13:21:59.409011  108740 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.409115  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.409127  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.409572  108740 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1017 13:21:59.409635  108740 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1017 13:21:59.409695  108740 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.409883  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.409897  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.410754  108740 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1017 13:21:59.410794  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.410806  108740 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.410875  108740 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1017 13:21:59.410954  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.410968  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.411491  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.411512  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.411693  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.413065  108740 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.413105  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.413224  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.413244  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.413807  108740 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1017 13:21:59.413835  108740 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1017 13:21:59.413851  108740 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1017 13:21:59.414242  108740 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.414449  108740 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.415197  108740 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.415385  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.415858  108740 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.416799  108740 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.417422  108740 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.418031  108740 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.418156  108740 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.418576  108740 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.419053  108740 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.419879  108740 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.420219  108740 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.421310  108740 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.421571  108740 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.422300  108740 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.422664  108740 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.423395  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.423696  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.423947  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.424172  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.424439  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.424770  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.425145  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.425946  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.426304  108740 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.427272  108740 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.428161  108740 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.428663  108740 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.429056  108740 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.430258  108740 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.430714  108740 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.432596  108740 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.433575  108740 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.434298  108740 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.435323  108740 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.435702  108740 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.435933  108740 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1017 13:21:59.436034  108740 master.go:464] Enabling API group "authentication.k8s.io".
I1017 13:21:59.436122  108740 master.go:464] Enabling API group "authorization.k8s.io".
I1017 13:21:59.436384  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.436643  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.436792  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.437858  108740 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 13:21:59.437920  108740 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 13:21:59.438019  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.438143  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.438174  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.438825  108740 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 13:21:59.438886  108740 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 13:21:59.439249  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.439109  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.440118  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.440620  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.440649  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.441471  108740 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1017 13:21:59.441519  108740 master.go:464] Enabling API group "autoscaling".
I1017 13:21:59.441536  108740 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1017 13:21:59.441679  108740 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.441888  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.441912  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.442277  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.442588  108740 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1017 13:21:59.442636  108740 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1017 13:21:59.442771  108740 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.442930  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.442956  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.444210  108740 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1017 13:21:59.444249  108740 master.go:464] Enabling API group "batch".
I1017 13:21:59.444254  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.444380  108740 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1017 13:21:59.444404  108740 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.445035  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.445059  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.445831  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.446049  108740 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1017 13:21:59.446077  108740 master.go:464] Enabling API group "certificates.k8s.io".
I1017 13:21:59.446185  108740 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.446266  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.446283  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.446355  108740 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1017 13:21:59.446868  108740 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1017 13:21:59.447039  108740 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.447058  108740 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1017 13:21:59.447186  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.447205  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.447339  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.447983  108740 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1017 13:21:59.448033  108740 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1017 13:21:59.448112  108740 master.go:464] Enabling API group "coordination.k8s.io".
I1017 13:21:59.448165  108740 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1017 13:21:59.448317  108740 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.448411  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.448519  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.448778  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.449330  108740 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1017 13:21:59.449356  108740 master.go:464] Enabling API group "extensions".
I1017 13:21:59.449366  108740 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1017 13:21:59.449543  108740 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.449656  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.449674  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.450221  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.450713  108740 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1017 13:21:59.450829  108740 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1017 13:21:59.450980  108740 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.451098  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.451103  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.451127  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.452263  108740 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1017 13:21:59.452286  108740 master.go:464] Enabling API group "networking.k8s.io".
I1017 13:21:59.452292  108740 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1017 13:21:59.452369  108740 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.452477  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.452488  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.452495  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.453797  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.453820  108740 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1017 13:21:59.453836  108740 master.go:464] Enabling API group "node.k8s.io".
I1017 13:21:59.453876  108740 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1017 13:21:59.453955  108740 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.454055  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.454076  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.454661  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.454885  108740 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1017 13:21:59.454927  108740 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1017 13:21:59.455033  108740 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.455157  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.455174  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.455560  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.455959  108740 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1017 13:21:59.456068  108740 master.go:464] Enabling API group "policy".
I1017 13:21:59.456176  108740 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.456085  108740 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1017 13:21:59.456607  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.456844  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.457056  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.457628  108740 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1017 13:21:59.457685  108740 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1017 13:21:59.457835  108740 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.457946  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.457969  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.458875  108740 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1017 13:21:59.458938  108740 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1017 13:21:59.458940  108740 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.459050  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.459078  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.459553  108740 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1017 13:21:59.459575  108740 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1017 13:21:59.459799  108740 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.459960  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.459986  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.460230  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.460446  108740 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1017 13:21:59.460487  108740 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1017 13:21:59.460520  108740 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.460666  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.460695  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.461490  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.462022  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.462079  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.462299  108740 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1017 13:21:59.462448  108740 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.462555  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.462574  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.462642  108740 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1017 13:21:59.463475  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.464064  108740 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1017 13:21:59.464126  108740 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.464146  108740 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1017 13:21:59.464239  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.464262  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.465430  108740 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1017 13:21:59.465541  108740 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1017 13:21:59.466106  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.466336  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.467307  108740 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.467478  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.467567  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.468700  108740 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1017 13:21:59.468885  108740 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1017 13:21:59.468831  108740 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1017 13:21:59.470399  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.472962  108740 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.473151  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.473412  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.474216  108740 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1017 13:21:59.474367  108740 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1017 13:21:59.474571  108740 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.474856  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.475236  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.475596  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.476639  108740 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1017 13:21:59.476664  108740 master.go:464] Enabling API group "scheduling.k8s.io".
I1017 13:21:59.476734  108740 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1017 13:21:59.476846  108740 master.go:453] Skipping disabled API group "settings.k8s.io".
I1017 13:21:59.477046  108740 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.477230  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.477266  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.477930  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.478575  108740 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1017 13:21:59.478633  108740 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1017 13:21:59.479085  108740 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.479276  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.479389  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.479481  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.480212  108740 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1017 13:21:59.480272  108740 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.480371  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.480385  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.480441  108740 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1017 13:21:59.481230  108740 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1017 13:21:59.481267  108740 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1017 13:21:59.481287  108740 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.481435  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.481462  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.481503  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.482537  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.482552  108740 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1017 13:21:59.482670  108740 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1017 13:21:59.483032  108740 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.483238  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.483307  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.483619  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.484219  108740 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1017 13:21:59.484408  108740 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.484528  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.484546  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.484617  108740 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1017 13:21:59.485284  108740 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1017 13:21:59.485310  108740 master.go:464] Enabling API group "storage.k8s.io".
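[Editor's note, not part of the log: each `client.go:357] parsed scheme: "endpoint"` / `endpoint.go:68] ccResolverWrapper: sending new addresses` pair above is one etcd client being opened for one resource store. A hedged sketch of the equivalent clientv3 call; the import path matches the etcd release vendored around this era and is an assumption.]

```go
// Sketch only: opening an etcd v3 client triggers the custom "endpoint"
// gRPC resolver scheme and pushes the address list to the ClientConn,
// which is exactly what the two log lines record.
package main

import (
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		// Same endpoint as TransportConfig.ServerList in the logged config.
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
}
```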
I1017 13:21:59.485481  108740 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.485599  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.485617  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.485694  108740 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1017 13:21:59.486446  108740 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1017 13:21:59.486484  108740 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1017 13:21:59.486625  108740 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.486747  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.486768  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.487177  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.487265  108740 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1017 13:21:59.487412  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.487411  108740 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.487523  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.487539  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.487554  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.487603  108740 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1017 13:21:59.488686  108740 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1017 13:21:59.488861  108740 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.488990  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.489010  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.489086  108740 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1017 13:21:59.489246  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.490086  108740 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1017 13:21:59.490129  108740 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1017 13:21:59.490253  108740 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.490442  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.490464  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.491009  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.491163  108740 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1017 13:21:59.491186  108740 master.go:464] Enabling API group "apps".
I1017 13:21:59.491193  108740 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1017 13:21:59.491258  108740 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.491465  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.491490  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.491494  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.492308  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.492664  108740 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1017 13:21:59.492747  108740 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.492807  108740 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1017 13:21:59.492891  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.492909  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.493416  108740 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1017 13:21:59.493499  108740 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.493629  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.493646  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.493820  108740 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1017 13:21:59.493951  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.494640  108740 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1017 13:21:59.494681  108740 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1017 13:21:59.494788  108740 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.494915  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.494942  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.494983  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.495557  108740 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1017 13:21:59.495577  108740 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1017 13:21:59.495583  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.495627  108740 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.496093  108740 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1017 13:21:59.496182  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:21:59.496203  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:21:59.496890  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.497023  108740 store.go:1342] Monitoring events count at <storage-prefix>//events
I1017 13:21:59.497043  108740 master.go:464] Enabling API group "events.k8s.io".
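[Editor's note, not part of the log: the `Listing and watching *T from storage/cacher.go` lines are the apiserver's watch cache running the standard Reflector loop, and each `Replace watchCache (rev: 48828)` line is the initial LIST replacing the cache contents; all caches report the same revision because 48828 is the single etcd store revision at startup. A minimal client-go sketch of that loop against a hypothetical kubeconfig path, for illustration only.]

```go
// Sketch only: LIST once, replace the store at the returned resourceVersion,
// then WATCH from there — the same pattern the cacher lines above trace.
package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location, not from the log.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// ListWatch over events, mirroring "Listing and watching *core.Event".
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "events", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Event{}, store, 0)

	stop := make(chan struct{})
	go r.Run(stop)
	time.Sleep(2 * time.Second) // let the initial LIST Replace() the store
	close(stop)
}
```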
I1017 13:21:59.497085  108740 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1017 13:21:59.497294  108740 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.497467  108740 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.497747  108740 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.497901  108740 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.498060  108740 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.498184  108740 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.498462  108740 watch_cache.go:451] Replace watchCache (rev: 48828) 
I1017 13:21:59.498611  108740 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.498773  108740 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.498913  108740 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.499052  108740 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.499815  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.500128  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.500823  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.501104  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.501988  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.502280  108740 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.502918  108740 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.503176  108740 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.503870  108740 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.504126  108740 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:21:59.504185  108740 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1017 13:21:59.504761  108740 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.504926  108740 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.505090  108740 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.505705  108740 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.506416  108740 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.507198  108740 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.507419  108740 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.508281  108740 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.508986  108740 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.509259  108740 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.509872  108740 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:21:59.509953  108740 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1017 13:21:59.510654  108740 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.511147  108740 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.511584  108740 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.512244  108740 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.512833  108740 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.513514  108740 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.514237  108740 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.514865  108740 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.515372  108740 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.516168  108740 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.516812  108740 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:21:59.516943  108740 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1017 13:21:59.517511  108740 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.518132  108740 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:21:59.518303  108740 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1017 13:21:59.518855  108740 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.519457  108740 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.519746  108740 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.521331  108740 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.521794  108740 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.522197  108740 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.522583  108740 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:21:59.522640  108740 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1017 13:21:59.523567  108740 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.524316  108740 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.524545  108740 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.525385  108740 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.525760  108740 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.526019  108740 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.526606  108740 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.526840  108740 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.527111  108740 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.527692  108740 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.528020  108740 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.528245  108740 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1017 13:21:59.528380  108740 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1017 13:21:59.528401  108740 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
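The "Skipping API ... because it has no resources" warnings above come from the generic API server: a group/version is only installed if at least one of its resources has REST storage registered. A minimal, self-contained sketch of that decision (the map type and helper below are illustrative stand-ins, not the real genericapiserver types):

```go
package main

import "fmt"

// storageByVersion is a stand-in for the per-group map of
// version -> (resource -> storage) kept by the generic API server.
type storageByVersion map[string]map[string]interface{}

// installableVersions mimics the check behind the
// "Skipping API %v because it has no resources." warning.
func installableVersions(group string, s storageByVersion) []string {
	var out []string
	for version, resources := range s {
		if len(resources) == 0 {
			fmt.Printf("Skipping API %s/%s because it has no resources.\n", group, version)
			continue
		}
		out = append(out, version)
	}
	return out
}

func main() {
	apps := storageByVersion{
		"v1":      {"deployments": struct{}{}, "daemonsets": struct{}{}},
		"v1beta1": {},
		"v1beta2": {},
	}
	fmt.Println("installing:", installableVersions("apps", apps))
}
```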
I1017 13:21:59.529082  108740 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.529534  108740 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.530061  108740 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.530572  108740 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1017 13:21:59.531237  108740 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8f90a16b-662c-477c-865c-00c5f03d6575", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
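Every storage_factory line above prints the same storagebackend.Config, differing only in which resource it stores. Rendered as a Go literal for readability (local stand-in types; field names and values are copied from the log output itself, not from the real k8s.io/apiserver definitions):

```go
package main

import (
	"fmt"
	"time"
)

// Local stand-ins for the storagebackend.Config fields visible in the log.
type TransportConfig struct {
	ServerList                []string
	KeyFile, CertFile, CAFile string
}

type Config struct {
	Type                  string
	Prefix                string
	Transport             TransportConfig
	Paging                bool
	CompactionInterval    time.Duration
	CountMetricPollPeriod time.Duration
}

func main() {
	// Values as printed in the log: one etcd endpoint, paging enabled,
	// 300000000000ns = 5m compaction interval, 60000000000ns = 1m poll period.
	cfg := Config{
		Prefix:                "8f90a16b-662c-477c-865c-00c5f03d6575",
		Transport:             TransportConfig{ServerList: []string{"http://127.0.0.1:2379"}},
		Paging:                true,
		CompactionInterval:    300 * time.Second,
		CountMetricPollPeriod: 60 * time.Second,
	}
	fmt.Printf("%+v\n", cfg)
}
```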
I1017 13:21:59.534216  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.534249  108740 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1017 13:21:59.534260  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.534271  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.534281  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.534289  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.534320  108740 httplog.go:90] GET /healthz: (245.561µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
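The multi-line `[+]`/`[-]` report above is the verbose /healthz body: each named check either passes or fails, failure reasons are withheld from the body (they appear in the server-side healthz.go lines instead), and the endpoint returns non-200 until every check passes. A sketch of a client that fetches and prints it, assuming a hypothetical local apiserver address (the test binds an ephemeral port):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical address; substitute the apiserver's actual host:port.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body) // Go 1.16+
	// Non-200 means at least one check is still marked [-]failed.
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}
```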
I1017 13:21:59.536193  108740 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.880582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37540]
I1017 13:21:59.538978  108740 httplog.go:90] GET /api/v1/services: (1.104843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37540]
I1017 13:21:59.543280  108740 httplog.go:90] GET /api/v1/services: (1.051877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37540]
I1017 13:21:59.545649  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.545887  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.545948  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.545974  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.545982  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.546017  108740 httplog.go:90] GET /healthz: (459.29µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37540]
I1017 13:21:59.547923  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.278267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:21:59.549001  108740 httplog.go:90] GET /api/v1/services: (2.453483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37540]
I1017 13:21:59.550112  108740 httplog.go:90] GET /api/v1/services: (1.671822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:21:59.553320  108740 httplog.go:90] POST /api/v1/namespaces: (4.622428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37542]
I1017 13:21:59.555129  108740 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.333511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:21:59.557267  108740 httplog.go:90] POST /api/v1/namespaces: (1.635546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:21:59.558608  108740 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.029493ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:21:59.560855  108740 httplog.go:90] POST /api/v1/namespaces: (1.751371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
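The GET-404 / POST-201 pairs above are the bootstrap controller ensuring the system namespaces (kube-system, kube-public, kube-node-lease) exist. A minimal ensure-exists sketch using client-go, assuming the current context-taking signatures (the 2019-era client omitted the ctx argument) and a default kubeconfig:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil || !apierrors.IsNotFound(err) {
		return err // already exists, or a real error
	}
	// 404: create it, mirroring the POST /api/v1/namespaces lines above.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err = cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), cs, ns); err != nil {
			fmt.Println(ns, "error:", err)
		}
	}
}
```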
I1017 13:21:59.635216  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.635258  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.635313  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.635327  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.635336  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.635410  108740 httplog.go:90] GET /healthz: (349.03µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:21:59.646903  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.647156  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.647235  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.647311  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.647406  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.647571  108740 httplog.go:90] GET /healthz: (858.183µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:21:59.735582  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.735623  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.735637  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.735647  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.735655  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.735687  108740 httplog.go:90] GET /healthz: (330.161µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:21:59.747185  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.747264  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.747277  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.747287  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.747296  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.747346  108740 httplog.go:90] GET /healthz: (636.564µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:21:59.835175  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.835204  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.835216  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.835283  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.835292  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.835328  108740 httplog.go:90] GET /healthz: (412.751µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:21:59.846965  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.846996  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.847008  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.847020  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.847025  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.847048  108740 httplog.go:90] GET /healthz: (240.839µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:21:59.935225  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.935258  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.935268  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.935274  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.935280  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.935312  108740 httplog.go:90] GET /healthz: (208.985µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:21:59.946937  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:21:59.946975  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:21:59.947022  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:21:59.947029  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:21:59.947034  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:21:59.947068  108740 httplog.go:90] GET /healthz: (355.169µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.035077  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:22:00.035109  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.035119  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.035125  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.035131  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.035169  108740 httplog.go:90] GET /healthz: (225.834µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:00.046962  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:22:00.046992  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.047000  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.047005  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.047011  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.047032  108740 httplog.go:90] GET /healthz: (198.673µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.135178  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:22:00.135215  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.135227  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.135237  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.135245  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.135284  108740 httplog.go:90] GET /healthz: (256.063µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:00.146952  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:22:00.146996  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.147007  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.147016  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.147025  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.147070  108740 httplog.go:90] GET /healthz: (363.19µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.235110  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:22:00.235158  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.235168  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.235174  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.235180  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.235221  108740 httplog.go:90] GET /healthz: (269.595µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:00.247105  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:22:00.247162  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.247174  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.247184  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.247192  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.247257  108740 httplog.go:90] GET /healthz: (371.731µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.335292  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:22:00.335464  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.335537  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.335601  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.335641  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.335818  108740 httplog.go:90] GET /healthz: (733.023µs) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:00.346921  108740 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1017 13:22:00.346970  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.346980  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.346986  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.346991  108740 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.347032  108740 httplog.go:90] GET /healthz: (248.849µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.375418  108740 client.go:357] parsed scheme: "endpoint"
I1017 13:22:00.375526  108740 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1017 13:22:00.436396  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.436571  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.436658  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.436770  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.436990  108740 httplog.go:90] GET /healthz: (1.979352ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:00.447969  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.448339  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.448480  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.448629  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.448972  108740 httplog.go:90] GET /healthz: (2.21709ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
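From this point the etcd check reads `[+]etcd ok` and only the post-start hooks are pending; the timestamps above show the harness polling /healthz roughly every 100ms until it returns 200. A sketch of such a wait loop, assuming a hypothetical base URL:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy polls /healthz until it returns 200 or the deadline passes.
func waitForHealthy(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // matches the cadence visible in the log
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthy("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```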
I1017 13:22:00.536015  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.536043  108740 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1017 13:22:00.536054  108740 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1017 13:22:00.536063  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1017 13:22:00.536102  108740 httplog.go:90] GET /healthz: (1.123625ms) 0 [Go-http-client/1.1 127.0.0.1:37858]
I1017 13:22:00.536121  108740 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.96611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.536339  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.635264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.538354  108740 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.724514ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.538643  108740 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.201884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37858]
I1017 13:22:00.538885  108740 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1017 13:22:00.539039  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.841027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37540]
I1017 13:22:00.540058  108740 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (857.113µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.540628  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.12396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37540]
I1017 13:22:00.541349  108740 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.168145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.542177  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.093308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37540]
I1017 13:22:00.542409  108740 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.585207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.542605  108740 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1017 13:22:00.542616  108740 storage_scheduling.go:148] all system priority classes were created successfully or already exist.
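The two POSTs above seed the built-in PriorityClasses with the values printed in the log (system-node-critical: 2000001000, system-cluster-critical: 2000000000), using the same get-then-create pattern. A sketch with client-go's scheduling/v1 client (this server still serves them under scheduling.k8s.io/v1beta1; the kubeconfig path is an assumption):

```go
package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Names and values copied from the log lines above.
	for name, value := range map[string]int32{
		"system-node-critical":    2000001000,
		"system-cluster-critical": 2000000000,
	} {
		_, err := cs.SchedulingV1().PriorityClasses().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			continue // already exists
		}
		if !apierrors.IsNotFound(err) {
			panic(err)
		}
		pc := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}
		if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Printf("created PriorityClass %s with value %d\n", name, value)
	}
}
```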
I1017 13:22:00.544420  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.578413ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.545611  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (684.029µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.547284  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.547305  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.547327  108740 httplog.go:90] GET /healthz: (578.338µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.548185  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.13889ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.549511  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (948.35µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.551210  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.331758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.552675  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (798.456µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.554425  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (892.398µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.556965  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.162712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.557264  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
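Each GET-404 / POST-201 / "created clusterrole" triple from here on is the RBAC bootstrap reconciler ensuring one default ClusterRole exists. A condensed sketch of that ensure loop (the role list and empty rules below are illustrative placeholders, not the real defaults, and the real reconciler also diffs the rules of existing roles):

```go
package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Placeholder stand-ins for two of the default roles seen in the log.
	desired := []rbacv1.ClusterRole{
		{ObjectMeta: metav1.ObjectMeta{Name: "system:discovery"}},
		{ObjectMeta: metav1.ObjectMeta{Name: "system:basic-user"}},
	}
	for i := range desired {
		role := &desired[i]
		_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
		if err == nil {
			continue // exists; a full reconciler would compare rules here
		}
		if !apierrors.IsNotFound(err) {
			panic(err)
		}
		if _, err := cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("created clusterrole", role.Name)
	}
}
```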
I1017 13:22:00.558350  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (717.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.560671  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.457459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.561584  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1017 13:22:00.562981  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (997.388µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.564854  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.469398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.565066  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1017 13:22:00.565973  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (736.96µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.568293  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.992978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.568491  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1017 13:22:00.570979  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.287629ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.572675  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.312078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.572901  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1017 13:22:00.573933  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (856.952µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.575566  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.170257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.575923  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1017 13:22:00.576879  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (750.65µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.578411  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.17599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.578576  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1017 13:22:00.579458  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (711.03µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.581494  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.701593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.581674  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1017 13:22:00.582589  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (693.941µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.585093  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.029997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.585305  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1017 13:22:00.586523  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (819.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.588846  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.820908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.589117  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1017 13:22:00.590084  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (818.245µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.592094  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.419359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.592350  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1017 13:22:00.594137  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.362463ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.596090  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.628886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.596482  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1017 13:22:00.597343  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (703.76µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.598920  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.599115  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1017 13:22:00.600663  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.386079ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.602633  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.602994  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1017 13:22:00.605336  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (2.190384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.607198  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.276012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.607371  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1017 13:22:00.608214  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (658.937µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.609827  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.214185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.610100  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1017 13:22:00.611109  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (830.543µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.612954  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.613226  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1017 13:22:00.614354  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (860.891µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.617422  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.449043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.617675  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1017 13:22:00.618671  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (691.387µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.620584  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.282109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.620939  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1017 13:22:00.621809  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (702.954µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.623546  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.407773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.623696  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1017 13:22:00.624647  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (718.899µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.626479  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.456581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.626616  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1017 13:22:00.627509  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (643.191µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.629092  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.206943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.629272  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1017 13:22:00.630053  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (654.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.631691  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.273724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.632005  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1017 13:22:00.633104  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (950.98µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.634609  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.135844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.634998  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1017 13:22:00.635706  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.635745  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.635820  108740 httplog.go:90] GET /healthz: (915.156µs) 0 [Go-http-client/1.1 127.0.0.1:37856]
I1017 13:22:00.636289  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (685.188µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.637795  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.20857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.637944  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1017 13:22:00.638902  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (749.788µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.641202  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.641457  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1017 13:22:00.642434  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (804.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.644326  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.41032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.644621  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1017 13:22:00.645649  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (797.759µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.647499  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.647529  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.647579  108740 httplog.go:90] GET /healthz: (1.047471ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.647874  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.757017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.648138  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1017 13:22:00.649158  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (760.732µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.650584  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.073247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.650792  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1017 13:22:00.651902  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (840.464µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.653989  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.654156  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1017 13:22:00.655104  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (804.166µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.657024  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.657171  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1017 13:22:00.658298  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (932.185µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.660170  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.519737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.660425  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1017 13:22:00.661474  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (785.001µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.663120  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.663311  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1017 13:22:00.664245  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (798.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.666033  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.371947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.666299  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1017 13:22:00.667347  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (738.906µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.669044  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.343381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.669215  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1017 13:22:00.670241  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (863.601µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.672406  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.779986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.672593  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1017 13:22:00.673544  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (696.532µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.675132  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.141728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.675337  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1017 13:22:00.676347  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (798.642µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.678001  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.353866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.678233  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1017 13:22:00.680106  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.629169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.682068  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.561073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.682444  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1017 13:22:00.683779  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (966.979µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.686003  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.777637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.686169  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1017 13:22:00.687087  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (701.347µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.689071  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.600853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.689436  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1017 13:22:00.690731  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.074747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.694444  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.199451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.694673  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1017 13:22:00.695662  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (704.554µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.697625  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.461582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.697940  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1017 13:22:00.699036  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (737.323µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.701691  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.104603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.702068  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1017 13:22:00.705817  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (989.798µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.708439  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.067609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.708817  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1017 13:22:00.709953  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (769.947µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.711877  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.636742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.712067  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1017 13:22:00.713009  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (757.385µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.715159  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.802405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.715375  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1017 13:22:00.716677  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (905.034µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.718790  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.519192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.719109  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1017 13:22:00.721011  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.723288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.723192  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.505981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.723620  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1017 13:22:00.725125  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.250002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.727211  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.590666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.727442  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1017 13:22:00.735831  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.358562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.736065  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.736097  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.736150  108740 httplog.go:90] GET /healthz: (1.335843ms) 0 [Go-http-client/1.1 127.0.0.1:37856]
I1017 13:22:00.748174  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.748325  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.748415  108740 httplog.go:90] GET /healthz: (1.747905ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.757089  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.658187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.757412  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1017 13:22:00.775710  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.293315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.796340  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.916845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.796630  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1017 13:22:00.815708  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.252058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.836186  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.836213  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.836883  108740 httplog.go:90] GET /healthz: (1.986001ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:00.836641  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.181251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.837147  108740 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
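
Every clusterrole above is reconciled with the same two requests: a GET that 404s, then a POST that 201s and logs "created clusterrole". That is an idempotent create-if-missing. A sketch of the equivalent call pattern against a recent client-go (the function name is ours, and the real reconciler in storage_rbac.go also handles updating existing roles):

package bootstrap

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRole reproduces the GET-404 / POST-201 pattern visible in
// the log: look the role up, and create it only when it does not exist.
func ensureClusterRole(client kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := client.RbacV1().ClusterRoles().Get(context.TODO(), role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // a real error, not the expected 404
	}
	_, err = client.RbacV1().ClusterRoles().Create(context.TODO(), role, metav1.CreateOptions{})
	if err == nil {
		fmt.Printf("created clusterrole %s\n", role.Name)
	}
	return err
}
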
I1017 13:22:00.848335  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.848406  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.848588  108740 httplog.go:90] GET /healthz: (1.274528ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.856236  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.81129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.876569  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.039768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.876851  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1017 13:22:00.895609  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.116186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.916487  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.983221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.916884  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1017 13:22:00.935992  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.380214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:00.936425  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.936601  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.936786  108740 httplog.go:90] GET /healthz: (1.911365ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:00.948420  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:00.948453  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:00.948504  108740 httplog.go:90] GET /healthz: (1.509575ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.956451  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.029346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.956703  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1017 13:22:00.976126  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.736826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.996347  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.96522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:00.996819  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1017 13:22:01.015822  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.338912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.036518  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.06018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.036783  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1017 13:22:01.036992  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.037191  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.037337  108740 httplog.go:90] GET /healthz: (2.371248ms) 0 [Go-http-client/1.1 127.0.0.1:37856]
I1017 13:22:01.047988  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.048018  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.048052  108740 httplog.go:90] GET /healthz: (1.271558ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.056230  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.689384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.077224  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.826853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.077754  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1017 13:22:01.095631  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.20188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.123465  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.685271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.123697  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1017 13:22:01.136097  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.136139  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.136179  108740 httplog.go:90] GET /healthz: (1.161959ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:01.136214  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.796887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.148155  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.148198  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.148239  108740 httplog.go:90] GET /healthz: (1.433055ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.158214  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.772028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.158465  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1017 13:22:01.175829  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.335064ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.198127  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.67198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.198462  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1017 13:22:01.216012  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.507302ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.236861  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.236901  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.236947  108740 httplog.go:90] GET /healthz: (1.472863ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:01.236972  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.508773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.237208  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1017 13:22:01.247790  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.247830  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.247880  108740 httplog.go:90] GET /healthz: (1.156001ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.255849  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.41543ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.276800  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.211055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.277038  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1017 13:22:01.295551  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.103739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.316542  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.089213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.317003  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1017 13:22:01.336071  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.336103  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.336130  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.699235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.336136  108740 httplog.go:90] GET /healthz: (1.175037ms) 0 [Go-http-client/1.1 127.0.0.1:37856]
I1017 13:22:01.347966  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.347999  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.348042  108740 httplog.go:90] GET /healthz: (1.328949ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.356657  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.244866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.357161  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1017 13:22:01.376381  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.881329ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.396589  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.064469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.397365  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1017 13:22:01.416396  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.793324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.437080  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.655763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:01.437315  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.437341  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.437375  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1017 13:22:01.437380  108740 httplog.go:90] GET /healthz: (2.388418ms) 0 [Go-http-client/1.1 127.0.0.1:37856]
I1017 13:22:01.447968  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.448002  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.448032  108740 httplog.go:90] GET /healthz: (1.222223ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.455500  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.15053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.477185  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.240643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.477449  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1017 13:22:01.495669  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.284576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.517354  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.901535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.517642  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1017 13:22:01.536268  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.536351  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.536395  108740 httplog.go:90] GET /healthz: (1.525469ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:01.537135  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.57092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.547980  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.548184  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.548394  108740 httplog.go:90] GET /healthz: (1.698314ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.558462  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.042994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.558698  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1017 13:22:01.575999  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.501434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.596681  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.138002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.597060  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1017 13:22:01.616272  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.742316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.637264  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.637611  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.637873  108740 httplog.go:90] GET /healthz: (2.82363ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:01.638193  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.607454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.638496  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1017 13:22:01.647977  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.648009  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.648037  108740 httplog.go:90] GET /healthz: (1.21859ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.656026  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.52471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.676940  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.35036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.677203  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1017 13:22:01.695544  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.000595ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.717127  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.671621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.717539  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1017 13:22:01.735980  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.736017  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.736065  108740 httplog.go:90] GET /healthz: (1.181772ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:01.736101  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.64701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.747995  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.748046  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.748089  108740 httplog.go:90] GET /healthz: (1.2735ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.756786  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.234562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.757058  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1017 13:22:01.775922  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.473058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.796387  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.966679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.797025  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1017 13:22:01.815398  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.009925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.836025  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.836061  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.836114  108740 httplog.go:90] GET /healthz: (1.190563ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:01.837151  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.242029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.837433  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1017 13:22:01.847875  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.847909  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.847997  108740 httplog.go:90] GET /healthz: (1.233532ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.855346  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (980.777µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.879299  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.994831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.879555  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1017 13:22:01.895443  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.039775ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.916394  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.905818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.916705  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1017 13:22:01.935945  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.935978  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.936023  108740 httplog.go:90] GET /healthz: (1.187182ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:01.936360  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.91818ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.948170  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:01.948239  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:01.948308  108740 httplog.go:90] GET /healthz: (1.530317ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.956353  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.908001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.956586  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1017 13:22:01.975845  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (995.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.996354  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.891938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:01.996611  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1017 13:22:02.015864  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.304542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.036365  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.036404  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.036439  108740 httplog.go:90] GET /healthz: (1.4709ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:02.036737  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.096625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.037037  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1017 13:22:02.047971  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.048009  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.048058  108740 httplog.go:90] GET /healthz: (1.264083ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.055998  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.555742ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.076885  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.480715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.077137  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1017 13:22:02.096397  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.842219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.116930  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.463743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.117325  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1017 13:22:02.136078  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.136106  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.136140  108740 httplog.go:90] GET /healthz: (1.25865ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:02.136903  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.447381ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.147922  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.147974  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.148019  108740 httplog.go:90] GET /healthz: (1.226876ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.156512  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.025419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.156941  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1017 13:22:02.175713  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.283346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.196626  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.206548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.196924  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1017 13:22:02.215959  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.5ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.236295  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.236331  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.236371  108740 httplog.go:90] GET /healthz: (1.425787ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:02.236981  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.490658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.237251  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1017 13:22:02.248400  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.248435  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.248476  108740 httplog.go:90] GET /healthz: (1.496932ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.255926  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.508099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.277077  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.629507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.277557  108740 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1017 13:22:02.296041  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.564064ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.297790  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.368503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.316960  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.556207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.319138  108740 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
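
From here the bootstrapper moves on to namespaced objects, and the sequence gains one request: GET the role (404), GET the namespace to confirm it exists (200), then POST the role. A hedged sketch of that three-step pattern, again with context-style client-go signatures rather than the exact storage_rbac.go code:

package bootstrap

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureRole mirrors the log's sequence for roles in kube-system and
// kube-public: role GET (404), namespace GET (200), role POST (201).
func ensureRole(client kubernetes.Interface, ns string, role *rbacv1.Role) error {
	_, err := client.RbacV1().Roles(ns).Get(context.TODO(), role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // role already present
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	// The namespace check that shows up between the 404 and the create:
	if _, err := client.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{}); err != nil {
		return err // the namespace must exist before the role can be created
	}
	_, err = client.RbacV1().Roles(ns).Create(context.TODO(), role, metav1.CreateOptions{})
	return err
}
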
I1017 13:22:02.335921  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.335956  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.335996  108740 httplog.go:90] GET /healthz: (1.064392ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:02.336033  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.542618ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.339218  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.600788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.348407  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.348458  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.348506  108740 httplog.go:90] GET /healthz: (1.726157ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.356948  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.498313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.357285  108740 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1017 13:22:02.380905  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.781003ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.392624  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (10.718351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.396380  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.863262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.396695  108740 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1017 13:22:02.416107  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.641603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.418501  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.575434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.436493  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.436812  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.436881  108740 httplog.go:90] GET /healthz: (2.066765ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:02.438233  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.69513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.438467  108740 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1017 13:22:02.447833  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.447859  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.447889  108740 httplog.go:90] GET /healthz: (1.140334ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.455766  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.353617ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.457565  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.252815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.476866  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.291628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.477200  108740 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1017 13:22:02.495527  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.116181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.497132  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.118914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.516248  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.796639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.516825  108740 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1017 13:22:02.535892  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.536209  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.536140  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.672975ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37856]
I1017 13:22:02.536598  108740 httplog.go:90] GET /healthz: (1.710264ms) 0 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:02.538493  108740 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.27038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.547898  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.547927  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.547962  108740 httplog.go:90] GET /healthz: (1.254519ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.556987  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.540438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.557424  108740 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1017 13:22:02.575695  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.249744ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.577673  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.227975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.596253  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.702082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.596531  108740 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1017 13:22:02.615999  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.487376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.618798  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.238813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.635894  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.635930  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.635967  108740 httplog.go:90] GET /healthz: (1.114325ms) 0 [Go-http-client/1.1 127.0.0.1:37856]
I1017 13:22:02.636817  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.368699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.637054  108740 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1017 13:22:02.647550  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.647627  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.647679  108740 httplog.go:90] GET /healthz: (959.841µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.655542  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.166758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.657164  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.089891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.676409  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.987414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.676686  108740 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1017 13:22:02.695477  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (956.053µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.697154  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.158622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.717003  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.519948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.717525  108740 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1017 13:22:02.736797  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.310498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.737872  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.737909  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.737948  108740 httplog.go:90] GET /healthz: (2.67576ms) 0 [Go-http-client/1.1 127.0.0.1:37856]
I1017 13:22:02.739932  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.581896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.747966  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.748004  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.748073  108740 httplog.go:90] GET /healthz: (1.291485ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.757703  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.335149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.759021  108740 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1017 13:22:02.776421  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.983059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.778316  108740 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.444939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.797685  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.197869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.798103  108740 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1017 13:22:02.816092  108740 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.558712ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.817925  108740 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.353712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.835686  108740 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1017 13:22:02.835741  108740 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1017 13:22:02.835788  108740 httplog.go:90] GET /healthz: (848.168µs) 0 [Go-http-client/1.1 127.0.0.1:37856]
I1017 13:22:02.838162  108740 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.715104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.838414  108740 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1017 13:22:02.848173  108740 httplog.go:90] GET /healthz: (1.3749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.850329  108740 httplog.go:90] GET /api/v1/namespaces/default: (1.65057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.852670  108740 httplog.go:90] POST /api/v1/namespaces: (1.717254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.854137  108740 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.118688ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.866941  108740 httplog.go:90] POST /api/v1/namespaces/default/services: (12.392709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.868544  108740 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.203549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.870130  108740 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.054164ms) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
E1017 13:22:02.870340  108740 controller.go:227] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I1017 13:22:02.936505  108740 httplog.go:90] GET /healthz: (1.465592ms) 200 [Go-http-client/1.1 127.0.0.1:37538]
I1017 13:22:02.940433  108740 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.846506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
W1017 13:22:02.941062  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941222  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941290  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941360  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941440  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941518  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941580  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941643  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941702  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941867  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1017 13:22:02.941940  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:22:02.943548  108740 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.34874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.944010  108740 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1017 13:22:02.944063  108740 factory.go:308] Registering predicate: PredicateOne
I1017 13:22:02.944074  108740 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1017 13:22:02.944081  108740 factory.go:308] Registering predicate: PredicateTwo
I1017 13:22:02.944087  108740 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1017 13:22:02.944124  108740 factory.go:323] Registering priority: PriorityOne
I1017 13:22:02.944133  108740 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1017 13:22:02.944145  108740 factory.go:323] Registering priority: PriorityTwo
I1017 13:22:02.944150  108740 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1017 13:22:02.944155  108740 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
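For context on the factory lines above: the configuration dump {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false} is a parsed v1 Policy naming two predicates and two weighted priorities, read back from the scheduler-custom-policy-config-0 ConfigMap created just before it. A minimal Go sketch of seeding such a ConfigMap follows; the policy.cfg data key and the exact object shape are assumptions for illustration, not the test's actual code:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Policy JSON mirroring what the factory logs for config-0:
	// two named predicates, two weighted priorities.
	policy := `{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [{"name": "PredicateOne"}, {"name": "PredicateTwo"}],
  "priorities": [{"name": "PriorityOne", "weight": 1}, {"name": "PriorityTwo", "weight": 5}]
}`
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "kube-system",
			Name:      "scheduler-custom-policy-config-0",
		},
		Data: map[string]string{"policy.cfg": policy}, // key name is an assumption
	}
	fmt.Println(cm.Name, len(cm.Data))
}
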
I1017 13:22:02.946609  108740 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.766696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
W1017 13:22:02.947105  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:22:02.948824  108740 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (1.355825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.949294  108740 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 13:22:02.949327  108740 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1017 13:22:02.949341  108740 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1017 13:22:02.949347  108740 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1017 13:22:02.951784  108740 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.726945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
W1017 13:22:02.952223  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:22:02.953937  108740 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (1.313649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.954151  108740 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 13:22:02.954173  108740 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
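One detail worth noting across these cases (an inference from the log, not a statement of the test's intent): config-1 and config-4 carry a Policy with no predicates/priorities fields at all, so factory.go falls back to the DefaultProvider sets, while config-2 and config-5 list the fields as explicitly empty and produce a scheduler with 'map[]' predicates and priorities. The difference is the usual nil-versus-empty-slice distinction after JSON decoding, sketched below with assumed field names:

package main

import (
	"encoding/json"
	"fmt"
)

// policy mirrors just enough of the v1 Policy schema for the demonstration.
type policy struct {
	Predicates []struct {
		Name string `json:"name"`
	} `json:"predicates"`
}

func main() {
	var omitted, empty policy
	// Fields absent entirely, as in scheduler-custom-policy-config-1.
	_ = json.Unmarshal([]byte(`{"kind":"Policy","apiVersion":"v1"}`), &omitted)
	// Fields present but empty, as in scheduler-custom-policy-config-2.
	_ = json.Unmarshal([]byte(`{"kind":"Policy","apiVersion":"v1","predicates":[]}`), &empty)
	// nil slice -> fall back to DefaultProvider; empty slice -> 'map[]' predicates.
	fmt.Println(omitted.Predicates == nil, empty.Predicates == nil) // true false
}
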
I1017 13:22:02.956157  108740 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.575866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
W1017 13:22:02.956513  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:22:02.957912  108740 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (994.606µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.958296  108740 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1017 13:22:02.958340  108740 factory.go:308] Registering predicate: PredicateOne
I1017 13:22:02.958350  108740 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1017 13:22:02.958357  108740 factory.go:308] Registering predicate: PredicateTwo
I1017 13:22:02.958363  108740 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1017 13:22:02.958370  108740 factory.go:323] Registering priority: PriorityOne
I1017 13:22:02.958376  108740 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1017 13:22:02.958384  108740 factory.go:323] Registering priority: PriorityTwo
I1017 13:22:02.958388  108740 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1017 13:22:02.958393  108740 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I1017 13:22:02.960138  108740 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.33223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
W1017 13:22:02.960326  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:22:02.961549  108740 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (970.934µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:02.961803  108740 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 13:22:02.961822  108740 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1017 13:22:02.961830  108740 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1017 13:22:02.961836  108740 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1017 13:22:03.137594  108740 request.go:538] Throttling request took 175.390275ms, request: POST:http://127.0.0.1:38211/api/v1/namespaces/kube-system/configmaps
I1017 13:22:03.140070  108740 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.135869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
W1017 13:22:03.140399  108740 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1017 13:22:03.337609  108740 request.go:538] Throttling request took 196.970964ms, request: GET:http://127.0.0.1:38211/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I1017 13:22:03.340005  108740 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (2.030815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:03.340642  108740 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1017 13:22:03.340677  108740 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I1017 13:22:03.537560  108740 request.go:538] Throttling request took 196.385622ms, request: DELETE:http://127.0.0.1:38211/api/v1/nodes
I1017 13:22:03.539999  108740 httplog.go:90] DELETE /api/v1/nodes: (2.141417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
I1017 13:22:03.540195  108740 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1017 13:22:03.541617  108740 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.177792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37538]
--- FAIL: TestSchedulerCreationFromConfigMap (4.17s)
    scheduler_test.go:290: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:297: unexpected predicates diff (-want, +got):   map[string][]config.Plugin(
        - 	nil,
        + 	{"FilterPlugin": {{Name: "TaintToleration"}}},
          )
    scheduler_test.go:290: Expected predicates map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]
    scheduler_test.go:290: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{}]
    scheduler_test.go:297: unexpected predicates diff (-want, +got):   map[string][]config.Plugin(
        - 	nil,
        + 	{"FilterPlugin": {{Name: "TaintToleration"}}},
          )
    scheduler_test.go:290: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{} PredicateOne:{} PredicateTwo:{}], got map[CheckNodeUnschedulable:{} PredicateOne:{} PredicateTwo:{}]
    scheduler_test.go:297: unexpected predicates diff (-want, +got):   map[string][]config.Plugin(
        - 	nil,
        + 	{"FilterPlugin": {{Name: "TaintToleration"}}},
          )
    scheduler_test.go:290: Expected predicates map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]
    scheduler_test.go:290: Expected predicates map[CheckNodeUnschedulable:{} PodToleratesNodeTaints:{}], got map[CheckNodeUnschedulable:{}]
    scheduler_test.go:297: unexpected predicates diff (-want, +got):   map[string][]config.Plugin(
        - 	nil,
        + 	{"FilterPlugin": {{Name: "TaintToleration"}}},
          )
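Reading the failure: each want-map still lists the PodToleratesNodeTaints fit predicate, while the scheduler actually built omits it and instead reports a TaintToleration entry under FilterPlugin. That pattern suggests taint handling has moved from a fit predicate to a framework filter plugin, leaving the test's expectations stale rather than the scheduler misbehaving. A minimal sketch (not the real scheduler_test.go) of the go-cmp comparison style that yields the "(-want, +got)" output above:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// Plugin stands in for the config.Plugin type named in the diff above.
type Plugin struct{ Name string }

func main() {
	// want: the stale expectation carries no extra filter plugins (nil map).
	var want map[string][]Plugin
	// got: the scheduler built from the policy reports TaintToleration
	// registered as a FilterPlugin.
	got := map[string][]Plugin{"FilterPlugin": {{Name: "TaintToleration"}}}
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("unexpected predicates diff (-want, +got): %s", diff)
	}
}

Under that reading, the fix would be to update the expected predicate maps and expected FilterPlugin list in TestSchedulerCreationFromConfigMap rather than to change the factory.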

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191017-131121.xml


Error lines from build-log.txt

... skipping 586 lines ...
W1017 13:05:27.780] I1017 13:05:27.779450   53264 controllermanager.go:534] Started "pvc-protection"
W1017 13:05:27.780] I1017 13:05:27.779546   53264 pvc_protection_controller.go:100] Starting PVC protection controller
W1017 13:05:27.780] I1017 13:05:27.779581   53264 shared_informer.go:197] Waiting for caches to sync for PVC protection
W1017 13:05:27.781] I1017 13:05:27.780762   53264 controllermanager.go:534] Started "replicationcontroller"
W1017 13:05:27.781] I1017 13:05:27.780858   53264 replica_set.go:182] Starting replicationcontroller controller
W1017 13:05:27.781] I1017 13:05:27.781017   53264 shared_informer.go:197] Waiting for caches to sync for ReplicationController
W1017 13:05:27.781] E1017 13:05:27.781429   53264 core.go:79] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1017 13:05:27.782] W1017 13:05:27.781456   53264 controllermanager.go:526] Skipping "service"
W1017 13:05:27.782] I1017 13:05:27.782103   53264 controllermanager.go:534] Started "clusterrole-aggregation"
W1017 13:05:27.783] W1017 13:05:27.782140   53264 controllermanager.go:513] "endpointslice" is disabled
W1017 13:05:27.783] I1017 13:05:27.782347   53264 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W1017 13:05:27.783] I1017 13:05:27.782432   53264 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
I1017 13:05:27.883] node/127.0.0.1 created
... skipping 41 lines ...
W1017 13:05:28.403] I1017 13:05:28.401508   53264 node_lifecycle_controller.go:455] Controller will reconcile labels.
W1017 13:05:28.403] I1017 13:05:28.401571   53264 controllermanager.go:534] Started "nodelifecycle"
W1017 13:05:28.403] W1017 13:05:28.401590   53264 controllermanager.go:526] Skipping "ttl-after-finished"
W1017 13:05:28.403] I1017 13:05:28.401694   53264 node_lifecycle_controller.go:488] Starting node controller
W1017 13:05:28.403] I1017 13:05:28.401752   53264 shared_informer.go:197] Waiting for caches to sync for taint
W1017 13:05:28.404] I1017 13:05:28.403446   53264 node_lifecycle_controller.go:77] Sending events to api server
W1017 13:05:28.405] E1017 13:05:28.403529   53264 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W1017 13:05:28.405] W1017 13:05:28.403545   53264 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W1017 13:05:28.406] I1017 13:05:28.405963   53264 controllermanager.go:534] Started "disruption"
W1017 13:05:28.406] I1017 13:05:28.405989   53264 disruption.go:333] Starting disruption controller
W1017 13:05:28.406] I1017 13:05:28.406030   53264 shared_informer.go:197] Waiting for caches to sync for disruption
W1017 13:05:28.413] I1017 13:05:28.412847   53264 controllermanager.go:534] Started "csrcleaner"
W1017 13:05:28.414] I1017 13:05:28.412904   53264 cleaner.go:81] Starting CSR cleaner controller
... skipping 5 lines ...
W1017 13:05:28.416] I1017 13:05:28.415509   53264 shared_informer.go:197] Waiting for caches to sync for TTL
W1017 13:05:28.417] W1017 13:05:28.415882   53264 controllermanager.go:526] Skipping "route"
W1017 13:05:28.417] I1017 13:05:28.416852   53264 deployment_controller.go:152] Starting deployment controller
W1017 13:05:28.417] I1017 13:05:28.416883   53264 shared_informer.go:197] Waiting for caches to sync for deployment
W1017 13:05:28.417] I1017 13:05:28.416930   53264 controllermanager.go:534] Started "deployment"
W1017 13:05:28.418] W1017 13:05:28.416970   53264 controllermanager.go:526] Skipping "nodeipam"
W1017 13:05:28.427] W1017 13:05:28.426477   53264 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1017 13:05:28.460] I1017 13:05:28.452863   53264 shared_informer.go:204] Caches are synced for PV protection 
W1017 13:05:28.461] I1017 13:05:28.457277   53264 shared_informer.go:204] Caches are synced for expand 
W1017 13:05:28.472] I1017 13:05:28.471985   53264 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W1017 13:05:28.477] I1017 13:05:28.476775   53264 shared_informer.go:204] Caches are synced for attach detach 
W1017 13:05:28.480] I1017 13:05:28.480096   53264 shared_informer.go:204] Caches are synced for PVC protection 
W1017 13:05:28.481] I1017 13:05:28.481367   53264 shared_informer.go:204] Caches are synced for ReplicationController 
... skipping 32 lines ...
I1017 13:05:28.868] }+++ [1017 13:05:28] Testing kubectl version: check client only output matches expected output
W1017 13:05:28.969] I1017 13:05:28.677176   53264 shared_informer.go:204] Caches are synced for namespace 
W1017 13:05:28.972] I1017 13:05:28.750202   53264 shared_informer.go:204] Caches are synced for endpoint 
W1017 13:05:28.972] I1017 13:05:28.782708   53264 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W1017 13:05:28.972] I1017 13:05:28.787360   53264 shared_informer.go:204] Caches are synced for service account 
W1017 13:05:28.973] I1017 13:05:28.789349   49669 controller.go:606] quota admission added evaluator for: serviceaccounts
W1017 13:05:28.973] E1017 13:05:28.801036   53264 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I1017 13:05:29.074] Successful: the flag '--client' shows correct client info
I1017 13:05:29.077] Successful: the flag '--client' correctly has no server version info
I1017 13:05:29.082] +++ [1017 13:05:29] Testing kubectl version: verify json output
W1017 13:05:29.183] I1017 13:05:29.086096   53264 shared_informer.go:204] Caches are synced for resource quota 
W1017 13:05:29.184] I1017 13:05:29.097108   53264 shared_informer.go:204] Caches are synced for garbage collector 
W1017 13:05:29.184] I1017 13:05:29.097165   53264 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
... skipping 59 lines ...
I1017 13:05:33.040] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:05:33.044] +++ command: run_RESTMapper_evaluation_tests
I1017 13:05:33.059] +++ [1017 13:05:33] Creating namespace namespace-1571317533-25681
I1017 13:05:33.160] namespace/namespace-1571317533-25681 created
I1017 13:05:33.260] Context "test" modified.
I1017 13:05:33.271] +++ [1017 13:05:33] Testing RESTMapper
I1017 13:05:33.428] +++ [1017 13:05:33] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1017 13:05:33.457] +++ exit code: 0
I1017 13:05:33.635] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1017 13:05:33.636] bindings                                                                      true         Binding
I1017 13:05:33.636] componentstatuses                 cs                                          false        ComponentStatus
I1017 13:05:33.637] configmaps                        cm                                          true         ConfigMap
I1017 13:05:33.637] endpoints                         ep                                          true         Endpoints
... skipping 317 lines ...
I1017 13:05:52.121] core.sh:79: Successful get pods/valid-pod {{.metadata.name}}: valid-pod
I1017 13:05:52.276] core.sh:81: Successful get pods {.items[*].metadata.name}: valid-pod
I1017 13:05:52.421] core.sh:82: Successful get pod valid-pod {.metadata.name}: valid-pod
I1017 13:05:52.574] core.sh:83: Successful get pod/valid-pod {.metadata.name}: valid-pod
I1017 13:05:52.718] core.sh:84: Successful get pods/valid-pod {.metadata.name}: valid-pod
I1017 13:05:52.880]
I1017 13:05:52.889] core.sh:86: FAIL!
I1017 13:05:52.889] Describe pods valid-pod
I1017 13:05:52.890]   Expected Match: Name:
I1017 13:05:52.890]   Not found in:
I1017 13:05:52.890] Name:         valid-pod
I1017 13:05:52.891] Namespace:    namespace-1571317550-8875
I1017 13:05:52.891] Priority:     0
... skipping 108 lines ...
I1017 13:05:53.452] QoS Class:        Guaranteed
I1017 13:05:53.452] Node-Selectors:   <none>
I1017 13:05:53.453] Tolerations:      <none>
I1017 13:05:53.453] Events:           <none>
I1017 13:05:53.453]
I1017 13:05:53.615] 
I1017 13:05:53.615] FAIL!
I1017 13:05:53.615] Describe pods
I1017 13:05:53.616]   Expected Match: Name:
I1017 13:05:53.616]   Not found in:
I1017 13:05:53.616] Name:         valid-pod
I1017 13:05:53.616] Namespace:    namespace-1571317550-8875
I1017 13:05:53.616] Priority:     0
... skipping 158 lines ...
I1017 13:06:01.008] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:06:01.290] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:06:01.460] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:06:01.734] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:06:01.881] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:06:02.017] pod "valid-pod" force deleted
W1017 13:06:02.117] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1017 13:06:02.118] error: setting 'all' parameter but found a non empty selector. 
W1017 13:06:02.118] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 13:06:02.219] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:06:02.262] core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I1017 13:06:02.353] (Bnamespace/test-kubectl-describe-pod created
I1017 13:06:02.471] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I1017 13:06:02.590] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I1017 13:06:03.815] poddisruptionbudget.policy/test-pdb-3 created
I1017 13:06:03.947] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1017 13:06:04.051] poddisruptionbudget.policy/test-pdb-4 created
I1017 13:06:04.185] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1017 13:06:04.412] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:06:04.678] pod/env-test-pod created
W1017 13:06:04.779] error: min-available and max-unavailable cannot be both specified
I1017 13:06:04.880] 
I1017 13:06:04.881] core.sh:264: FAIL!
I1017 13:06:04.881] Describe pods --namespace=test-kubectl-describe-pod env-test-pod
I1017 13:06:04.881]   Expected Match: TEST_CMD_1
I1017 13:06:04.881]   Not found in:
I1017 13:06:04.881] Name:         env-test-pod
I1017 13:06:04.881] Namespace:    test-kubectl-describe-pod
I1017 13:06:04.881] Priority:     0
... skipping 23 lines ...
I1017 13:06:04.884] Tolerations:       <none>
I1017 13:06:04.884] Events:            <none>
I1017 13:06:04.884]
I1017 13:06:04.884] 264 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1017 13:06:04.884]
I1017 13:06:04.990] 
I1017 13:06:04.990] FAIL!
I1017 13:06:04.990] Describe pods --namespace=test-kubectl-describe-pod
I1017 13:06:04.991]   Expected Match: TEST_CMD_1
I1017 13:06:04.991]   Not found in:
I1017 13:06:04.991] Name:         env-test-pod
I1017 13:06:04.991] Namespace:    test-kubectl-describe-pod
I1017 13:06:04.991] Priority:     0
... skipping 35 lines ...
I1017 13:06:05.626] namespace "test-kubectl-describe-pod" deleted
I1017 13:06:10.808] +++ [1017 13:06:10] Creating namespace namespace-1571317570-1922
I1017 13:06:10.903] namespace/namespace-1571317570-1922 created
I1017 13:06:10.994] Context "test" modified.
I1017 13:06:11.123] core.sh:278: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:06:11.363] pod/valid-pod created
W1017 13:06:11.597] error: the path "test/e2e/testing-manifests/kubectl/redis-master-pod.yaml" does not exist
I1017 13:06:11.735] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: redis-master:valid-pod:, got: valid-pod:
I1017 13:06:11.737] 
I1017 13:06:11.744] core.sh:283: FAIL!
I1017 13:06:11.744] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I1017 13:06:11.745]   Expected: redis-master:valid-pod:
I1017 13:06:11.745]   Got:      valid-pod:
I1017 13:06:11.745]
I1017 13:06:11.745] 53 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I1017 13:06:11.745]
I1017 13:06:11.859] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: redis-master:valid-pod:, got: valid-pod:
I1017 13:06:11.861] 
I1017 13:06:11.867] core.sh:287: FAIL!
I1017 13:06:11.868] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I1017 13:06:11.868]   Expected: redis-master:valid-pod:
I1017 13:06:11.868]   Got:      valid-pod:
I1017 13:06:11.868]
I1017 13:06:11.868] 53 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I1017 13:06:11.868]
W1017 13:06:11.969] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:06:11.978] Error from server (NotFound): pods "redis-master" not found
I1017 13:06:12.079] pod "valid-pod" force deleted
I1017 13:06:12.110] core.sh:291: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:06:12.116] +++ [1017 13:06:12] Creating namespace namespace-1571317572-7697
I1017 13:06:12.213] namespace/namespace-1571317572-7697 created
I1017 13:06:12.307] Context "test" modified.
I1017 13:06:12.424] core.sh:296: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 98 lines ...
I1017 13:06:20.836] pod/valid-pod patched
I1017 13:06:20.969] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1017 13:06:21.069] pod/valid-pod patched
I1017 13:06:21.204] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1017 13:06:21.434] pod/valid-pod patched
I1017 13:06:21.566] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1017 13:06:21.803] +++ [1017 13:06:21] "kubectl patch with resourceVersion 505" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I1017 13:06:22.160] pod "valid-pod" deleted
I1017 13:06:22.174] pod/valid-pod replaced
I1017 13:06:22.327] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1017 13:06:22.612] Successful
I1017 13:06:22.612] message:error: --grace-period must have --force specified
I1017 13:06:22.612] has:\-\-grace-period must have \-\-force specified
I1017 13:06:22.808] Successful
I1017 13:06:22.809] message:error: --timeout must have --force specified
I1017 13:06:22.809] has:\-\-timeout must have \-\-force specified
I1017 13:06:23.037] node/node-v1-test created
W1017 13:06:23.138] W1017 13:06:23.036354   53264 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1017 13:06:23.259] node/node-v1-test replaced
I1017 13:06:23.396] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1017 13:06:23.499] node "node-v1-test" deleted
I1017 13:06:23.629] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1017 13:06:23.997] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I1017 13:06:25.422] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 25 lines ...
I1017 13:06:25.729]     name: kubernetes-pause
I1017 13:06:25.729] has:localonlyvalue
I1017 13:06:25.776] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1017 13:06:26.026] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1017 13:06:26.160] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1017 13:06:26.292] pod/valid-pod labeled
W1017 13:06:26.393] error: 'name' already has a value (valid-pod), and --overwrite is false
I1017 13:06:26.494] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I1017 13:06:26.544] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:06:26.659] pod "valid-pod" force deleted
W1017 13:06:26.760] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 13:06:26.860] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:06:26.861] +++ [1017 13:06:26] Creating namespace namespace-1571317586-10492
... skipping 82 lines ...
I1017 13:06:36.404] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1017 13:06:36.408] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:06:36.413] +++ command: run_kubectl_create_error_tests
I1017 13:06:36.432] +++ [1017 13:06:36] Creating namespace namespace-1571317596-25654
I1017 13:06:36.533] namespace/namespace-1571317596-25654 created
I1017 13:06:36.629] Context "test" modified.
I1017 13:06:36.639] +++ [1017 13:06:36] Testing kubectl create with error
W1017 13:06:36.739] Error: must specify one of -f and -k
W1017 13:06:36.740] 
W1017 13:06:36.740] Create a resource from a file or from stdin.
W1017 13:06:36.740] 
W1017 13:06:36.740]  JSON and YAML formats are accepted.
W1017 13:06:36.740] 
W1017 13:06:36.740] Examples:
... skipping 41 lines ...
W1017 13:06:36.745] 
W1017 13:06:36.745] Usage:
W1017 13:06:36.745]   kubectl create -f FILENAME [options]
W1017 13:06:36.746] 
W1017 13:06:36.746] Use "kubectl <command> --help" for more information about a given command.
W1017 13:06:36.746] Use "kubectl options" for a list of global command-line options (applies to all commands).
I1017 13:06:36.946] +++ [1017 13:06:36] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1017 13:06:37.048] kubectl convert is DEPRECATED and will be removed in a future version.
W1017 13:06:37.049] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1017 13:06:37.180] +++ exit code: 0
I1017 13:06:37.225] Recording: run_kubectl_apply_tests
I1017 13:06:37.225] Running command: run_kubectl_apply_tests
I1017 13:06:37.257] 
... skipping 17 lines ...
I1017 13:06:39.467] pod "test-pod" deleted
I1017 13:06:39.748] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W1017 13:06:40.174] I1017 13:06:40.173759   49669 client.go:357] parsed scheme: "endpoint"
W1017 13:06:40.175] I1017 13:06:40.173820   49669 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1017 13:06:40.179] I1017 13:06:40.179150   49669 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I1017 13:06:40.280] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W1017 13:06:40.381] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1017 13:06:40.482] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1017 13:06:40.482] +++ exit code: 0
I1017 13:06:40.511] Recording: run_kubectl_run_tests
I1017 13:06:40.512] Running command: run_kubectl_run_tests
I1017 13:06:40.540] 
I1017 13:06:40.557] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 7 lines ...
I1017 13:06:41.051] job.batch/pi created
W1017 13:06:41.152] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:06:41.152] I1017 13:06:41.038313   49669 controller.go:606] quota admission added evaluator for: jobs.batch
W1017 13:06:41.153] I1017 13:06:41.056250   53264 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571317600-15653", Name:"pi", UID:"ff3284a1-bc95-471b-b04c-da85efabc35d", APIVersion:"batch/v1", ResourceVersion:"518", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-5dcbb
I1017 13:06:41.253] run.sh:33: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: pi:
I1017 13:06:41.321]
I1017 13:06:41.321] FAIL!
I1017 13:06:41.321] Describe pods
I1017 13:06:41.321]   Expected Match: Name:
I1017 13:06:41.321]   Not found in:
I1017 13:06:41.322] Name:           pi-5dcbb
I1017 13:06:41.322] Namespace:      namespace-1571317600-15653
I1017 13:06:41.322] Priority:       0
... skipping 83 lines ...
I1017 13:06:43.763] Context "test" modified.
I1017 13:06:43.773] +++ [1017 13:06:43] Testing kubectl create filter
I1017 13:06:43.886] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:06:44.129] pod/selector-test-pod created
I1017 13:06:44.265] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1017 13:06:44.395] Successful
I1017 13:06:44.395] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1017 13:06:44.396] has:pods "selector-test-pod-dont-apply" not found
I1017 13:06:44.493] pod "selector-test-pod" deleted
I1017 13:06:44.522] +++ exit code: 0
I1017 13:06:44.568] Recording: run_kubectl_apply_deployments_tests
I1017 13:06:44.569] Running command: run_kubectl_apply_deployments_tests
I1017 13:06:44.602] 
... skipping 29 lines ...
W1017 13:06:47.562] I1017 13:06:47.464895   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317604-22361", Name:"nginx", UID:"0fcf8faa-5376-4433-9543-ad2e71836cf2", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W1017 13:06:47.563] I1017 13:06:47.468779   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317604-22361", Name:"nginx-8484dd655", UID:"426805a3-d479-468c-9bf6-e142661cff8f", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-8gxgq
W1017 13:06:47.563] I1017 13:06:47.471738   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317604-22361", Name:"nginx-8484dd655", UID:"426805a3-d479-468c-9bf6-e142661cff8f", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-9gc5l
W1017 13:06:47.564] I1017 13:06:47.472422   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317604-22361", Name:"nginx-8484dd655", UID:"426805a3-d479-468c-9bf6-e142661cff8f", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-6mqhj
I1017 13:06:47.664] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I1017 13:06:51.897] Successful
I1017 13:06:51.897] message:Error from server (Conflict): error when applying patch:
I1017 13:06:51.900] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1571317604-22361\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1017 13:06:51.900] to:
I1017 13:06:51.900] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I1017 13:06:51.901] Name: "nginx", Namespace: "namespace-1571317604-22361"
I1017 13:06:51.903] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1571317604-22361\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-10-17T13:06:47Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1571317604-22361" "resourceVersion":"604" "selfLink":"/apis/apps/v1/namespaces/namespace-1571317604-22361/deployments/nginx" "uid":"0fcf8faa-5376-4433-9543-ad2e71836cf2"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-10-17T13:06:47Z" "lastUpdateTime":"2019-10-17T13:06:47Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-10-17T13:06:47Z" "lastUpdateTime":"2019-10-17T13:06:47Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I1017 13:06:51.903] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I1017 13:06:51.904] has:Error from server (Conflict)
W1017 13:06:52.004] I1017 13:06:50.566288   53264 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1571317592-10354
I1017 13:06:57.202] deployment.apps/nginx configured
W1017 13:06:57.303] I1017 13:06:57.206177   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317604-22361", Name:"nginx", UID:"55add469-21de-4745-8558-d125e8eb50a5", APIVersion:"apps/v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W1017 13:06:57.304] I1017 13:06:57.210128   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317604-22361", Name:"nginx-668b6c7744", UID:"3032ffe8-85bb-4833-8c3c-944335d6ebe2", APIVersion:"apps/v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-dwj7d
W1017 13:06:57.304] I1017 13:06:57.215165   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317604-22361", Name:"nginx-668b6c7744", UID:"3032ffe8-85bb-4833-8c3c-944335d6ebe2", APIVersion:"apps/v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-8zzd5
W1017 13:06:57.305] I1017 13:06:57.216709   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317604-22361", Name:"nginx-668b6c7744", UID:"3032ffe8-85bb-4833-8c3c-944335d6ebe2", APIVersion:"apps/v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-l69b7
... skipping 142 lines ...
I1017 13:07:05.359] +++ [1017 13:07:05] Creating namespace namespace-1571317625-5013
I1017 13:07:05.468] namespace/namespace-1571317625-5013 created
I1017 13:07:05.573] Context "test" modified.
I1017 13:07:05.582] +++ [1017 13:07:05] Testing kubectl get
I1017 13:07:05.713] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:07:05.846] Successful
I1017 13:07:05.847] message:Error from server (NotFound): pods "abc" not found
I1017 13:07:05.847] has:pods "abc" not found
I1017 13:07:05.991] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:07:06.139] Successful
I1017 13:07:06.139] message:Error from server (NotFound): pods "abc" not found
I1017 13:07:06.139] has:pods "abc" not found
I1017 13:07:06.277] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:07:06.394] Successful
I1017 13:07:06.395] message:{
I1017 13:07:06.395]     "apiVersion": "v1",
I1017 13:07:06.395]     "items": [],
... skipping 23 lines ...
I1017 13:07:06.868] has not:No resources found
I1017 13:07:06.975] Successful
I1017 13:07:06.975] message:NAME
I1017 13:07:06.975] has not:No resources found
I1017 13:07:07.082] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:07:07.219] Successful
I1017 13:07:07.220] message:error: the server doesn't have a resource type "foobar"
I1017 13:07:07.220] has not:No resources found
I1017 13:07:07.324] Successful
I1017 13:07:07.325] message:No resources found in namespace-1571317625-5013 namespace.
I1017 13:07:07.325] has:No resources found
I1017 13:07:07.427] Successful
I1017 13:07:07.428] message:
I1017 13:07:07.428] has not:No resources found
I1017 13:07:07.525] Successful
I1017 13:07:07.525] message:No resources found in namespace-1571317625-5013 namespace.
I1017 13:07:07.525] has:No resources found
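The mix of messages above reflects where the "No resources found" hint goes: kubectl emits it on stderr for human-readable output and, as these checks suggest, omits it for machine-readable formats. A sketch against an empty namespace:
  $ kubectl get pods              # stderr: No resources found in <namespace> namespace.
  $ kubectl get pods -o name      # prints nothing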
I1017 13:07:07.635] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:07:07.738] Successful
I1017 13:07:07.739] message:Error from server (NotFound): pods "abc" not found
I1017 13:07:07.740] has:pods "abc" not found
I1017 13:07:07.740] FAIL!
I1017 13:07:07.741] message:Error from server (NotFound): pods "abc" not found
I1017 13:07:07.741] has not:List
I1017 13:07:07.741] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1017 13:07:07.879] Successful
I1017 13:07:07.879] message:I1017 13:07:07.822642   63047 loader.go:375] Config loaded from file:  /tmp/tmp.Tf9UxSIjmH/.kube/config
I1017 13:07:07.880] I1017 13:07:07.824486   63047 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I1017 13:07:07.880] I1017 13:07:07.850372   63047 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I1017 13:07:13.689] Successful
I1017 13:07:13.689] message:NAME    DATA   AGE
I1017 13:07:13.689] one     0      0s
I1017 13:07:13.690] three   0      0s
I1017 13:07:13.690] two     0      0s
I1017 13:07:13.690] STATUS    REASON          MESSAGE
I1017 13:07:13.690] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:07:13.690] has not:watch is only supported on individual resources
I1017 13:07:14.815] Successful
I1017 13:07:14.816] message:STATUS    REASON          MESSAGE
I1017 13:07:14.816] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:07:14.816] has not:watch is only supported on individual resources
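The InternalError is expected here: these checks run a watch with a short client-side timeout, so the stream is cut off mid-read and the decoder reports the canceled request. A plausible reproduction (timeout value hypothetical):
  $ kubectl get configmaps --watch --request-timeout=1s
The assertion itself only cares that the command no longer fails with "watch is only supported on individual resources".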
I1017 13:07:14.827] +++ [1017 13:07:14] Creating namespace namespace-1571317634-16453
I1017 13:07:14.932] namespace/namespace-1571317634-16453 created
I1017 13:07:15.038] Context "test" modified.
I1017 13:07:15.166] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:07:15.380] pod/valid-pod created
... skipping 56 lines ...
I1017 13:07:15.510] }
I1017 13:07:15.632] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:07:15.992] <no value>Successful
I1017 13:07:15.992] message:valid-pod:
I1017 13:07:15.992] has:valid-pod:
I1017 13:07:16.106] Successful
I1017 13:07:16.106] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1017 13:07:16.106] 	template was:
I1017 13:07:16.106] 		{.missing}
I1017 13:07:16.107] 	object given to jsonpath engine was:
I1017 13:07:16.108] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-10-17T13:07:15Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1571317634-16453", "resourceVersion":"710", "selfLink":"/api/v1/namespaces/namespace-1571317634-16453/pods/valid-pod", "uid":"4ef0f4b6-2fd5-42bd-aa90-b5734bc76deb"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1017 13:07:16.108] has:missing is not found
W1017 13:07:16.215] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I1017 13:07:16.316] Successful
I1017 13:07:16.316] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1017 13:07:16.316] 	template was:
I1017 13:07:16.317] 		{{.missing}}
I1017 13:07:16.317] 	raw data was:
I1017 13:07:16.318] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-10-17T13:07:15Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1571317634-16453","resourceVersion":"710","selfLink":"/api/v1/namespaces/namespace-1571317634-16453/pods/valid-pod","uid":"4ef0f4b6-2fd5-42bd-aa90-b5734bc76deb"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1017 13:07:16.318] 	object given to template engine was:
I1017 13:07:16.319] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-10-17T13:07:15Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1571317634-16453 resourceVersion:710 selfLink:/api/v1/namespaces/namespace-1571317634-16453/pods/valid-pod uid:4ef0f4b6-2fd5-42bd-aa90-b5734bc76deb] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I1017 13:07:16.320] has:map has no entry for key "missing"
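Both template failures come from querying a key the pod object does not carry; jsonpath and go-template report the miss differently, which is why the suite asserts on both messages. A minimal sketch against this run's pod:
  $ kubectl get pod valid-pod -o jsonpath='{.missing}'        # missing is not found
  $ kubectl get pod valid-pod -o go-template='{{.missing}}'   # map has no entry for key "missing"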
I1017 13:07:17.351] Successful
I1017 13:07:17.352] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 13:07:17.352] valid-pod   0/1     Pending   0          1s
I1017 13:07:17.352] STATUS      REASON          MESSAGE
I1017 13:07:17.353] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:07:17.353] has:STATUS
I1017 13:07:17.354] Successful
I1017 13:07:17.354] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 13:07:17.354] valid-pod   0/1     Pending   0          1s
I1017 13:07:17.355] STATUS      REASON          MESSAGE
I1017 13:07:17.355] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:07:17.355] has:valid-pod
I1017 13:07:18.468] Successful
I1017 13:07:18.468] message:pod/valid-pod
I1017 13:07:18.468] has not:STATUS
I1017 13:07:18.472] Successful
I1017 13:07:18.472] message:pod/valid-pod
... skipping 72 lines ...
I1017 13:07:19.587] status:
I1017 13:07:19.587]   phase: Pending
I1017 13:07:19.587]   qosClass: Guaranteed
I1017 13:07:19.587] ---
I1017 13:07:19.587] has:name: valid-pod
I1017 13:07:19.682] Successful
I1017 13:07:19.683] message:Error from server (NotFound): pods "invalid-pod" not found
I1017 13:07:19.683] has:"invalid-pod" not found
I1017 13:07:19.779] pod "valid-pod" deleted
I1017 13:07:19.905] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:07:20.118] pod/redis-master created
I1017 13:07:20.122] pod/valid-pod created
I1017 13:07:20.241] Successful
... skipping 35 lines ...
I1017 13:07:21.848] +++ command: run_kubectl_exec_pod_tests
I1017 13:07:21.867] +++ [1017 13:07:21] Creating namespace namespace-1571317641-29520
I1017 13:07:21.968] namespace/namespace-1571317641-29520 created
I1017 13:07:22.054] Context "test" modified.
I1017 13:07:22.063] +++ [1017 13:07:22] Testing kubectl exec POD COMMAND
I1017 13:07:22.181] Successful
I1017 13:07:22.182] message:Error from server (NotFound): pods "abc" not found
I1017 13:07:22.182] has:pods "abc" not found
I1017 13:07:22.400] pod/test-pod created
I1017 13:07:22.527] Successful
I1017 13:07:22.528] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 13:07:22.528] has not:pods "test-pod" not found
I1017 13:07:22.529] Successful
I1017 13:07:22.529] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 13:07:22.530] has not:pod or type/name must be specified
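The BadRequest is expected in this environment: test-cmd runs an API server without kubelets, so the pod is never scheduled and exec has no host to dial. Sketch:
  $ kubectl exec test-pod -- date
  Error from server (BadRequest): pod test-pod does not have a host assigned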
I1017 13:07:22.624] pod "test-pod" deleted
I1017 13:07:22.646] +++ exit code: 0
I1017 13:07:22.693] Recording: run_kubectl_exec_resource_name_tests
I1017 13:07:22.693] Running command: run_kubectl_exec_resource_name_tests
I1017 13:07:22.727] 
... skipping 2 lines ...
I1017 13:07:22.736] +++ command: run_kubectl_exec_resource_name_tests
I1017 13:07:22.754] +++ [1017 13:07:22] Creating namespace namespace-1571317642-28403
I1017 13:07:22.849] namespace/namespace-1571317642-28403 created
I1017 13:07:22.945] Context "test" modified.
I1017 13:07:22.956] +++ [1017 13:07:22] Testing kubectl exec TYPE/NAME COMMAND
I1017 13:07:23.079] Successful
I1017 13:07:23.079] message:error: the server doesn't have a resource type "foo"
I1017 13:07:23.080] has:error:
I1017 13:07:23.185] Successful
I1017 13:07:23.185] message:Error from server (NotFound): deployments.apps "bar" not found
I1017 13:07:23.185] has:"bar" not found
I1017 13:07:23.378] pod/test-pod created
I1017 13:07:23.600] replicaset.apps/frontend created
W1017 13:07:23.701] I1017 13:07:23.605073   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317642-28403", Name:"frontend", UID:"272b8978-f461-4cd2-86c1-aa8b822f2097", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6tfnc
W1017 13:07:23.702] I1017 13:07:23.607126   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317642-28403", Name:"frontend", UID:"272b8978-f461-4cd2-86c1-aa8b822f2097", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9w4j4
W1017 13:07:23.702] I1017 13:07:23.608451   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317642-28403", Name:"frontend", UID:"272b8978-f461-4cd2-86c1-aa8b822f2097", APIVersion:"apps/v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2pt2k
I1017 13:07:23.819] configmap/test-set-env-config created
I1017 13:07:23.939] Successful
I1017 13:07:23.940] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I1017 13:07:23.940] has:not implemented
I1017 13:07:24.051] Successful
I1017 13:07:24.051] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 13:07:24.051] has not:not found
I1017 13:07:24.053] Successful
I1017 13:07:24.053] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1017 13:07:24.054] has not:pod or type/name must be specified
I1017 13:07:24.175] Successful
I1017 13:07:24.176] message:Error from server (BadRequest): pod frontend-2pt2k does not have a host assigned
I1017 13:07:24.177] has not:not found
I1017 13:07:24.179] Successful
I1017 13:07:24.179] message:Error from server (BadRequest): pod frontend-2pt2k does not have a host assigned
I1017 13:07:24.179] has not:pod or type/name must be specified
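These are the same exec checks driven through a resource reference: kubectl resolves TYPE/NAME to a pod behind the selector before failing on the unassigned host. Sketch using this run's objects:
  $ kubectl exec deploy/bar -- date    # Error from server (NotFound): deployments.apps "bar" not found
  $ kubectl exec rs/frontend -- date   # resolves to one frontend pod, then the BadRequest above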
I1017 13:07:24.279] pod "test-pod" deleted
I1017 13:07:24.392] replicaset.apps "frontend" deleted
I1017 13:07:24.494] configmap "test-set-env-config" deleted
I1017 13:07:24.517] +++ exit code: 0
I1017 13:07:24.559] Recording: run_create_secret_tests
I1017 13:07:24.560] Running command: run_create_secret_tests
I1017 13:07:24.591] 
I1017 13:07:24.594] +++ Running case: test-cmd.run_create_secret_tests 
I1017 13:07:24.597] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:07:24.600] +++ command: run_create_secret_tests
I1017 13:07:24.716] Successful
I1017 13:07:24.716] message:Error from server (NotFound): secrets "mysecret" not found
I1017 13:07:24.717] has:secrets "mysecret" not found
I1017 13:07:24.948] Successful
I1017 13:07:24.949] message:Error from server (NotFound): secrets "mysecret" not found
I1017 13:07:24.949] has:secrets "mysecret" not found
I1017 13:07:24.952] Successful
I1017 13:07:24.952] message:user-specified
I1017 13:07:24.952] has:user-specified
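A sketch of the secret checks, assuming a literal-backed generic secret (the key name is hypothetical): the suite first asserts the secret is absent, then creates it and reads a user-specified value back.
  $ kubectl get secret mysecret   # Error from server (NotFound): secrets "mysecret" not found
  $ kubectl create secret generic mysecret --from-literal=key1=user-specified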
I1017 13:07:25.055] Successful
I1017 13:07:25.155] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"5ec2413e-5f72-48cb-9ced-0f5f5a029bf9","resourceVersion":"785","creationTimestamp":"2019-10-17T13:07:25Z"}}
... skipping 2 lines ...
I1017 13:07:25.406] has:uid
I1017 13:07:25.510] Successful
I1017 13:07:25.510] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"5ec2413e-5f72-48cb-9ced-0f5f5a029bf9","resourceVersion":"786","creationTimestamp":"2019-10-17T13:07:25Z"},"data":{"key1":"config1"}}
I1017 13:07:25.510] has:config1
I1017 13:07:25.619] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"5ec2413e-5f72-48cb-9ced-0f5f5a029bf9"}}
I1017 13:07:25.750] Successful
I1017 13:07:25.751] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I1017 13:07:25.751] has:configmaps "tester-update-cm" not found
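The raw JSON bodies above are the configmap lifecycle driven through kubectl's --raw passthrough rather than the typed commands; a hedged sketch with hypothetical payload files:
  $ kubectl create --raw /api/v1/namespaces/default/configmaps -f /tmp/cm.json
  $ kubectl replace --raw /api/v1/namespaces/default/configmaps/tester-update-cm -f /tmp/cm-updated.json
  $ kubectl delete --raw /api/v1/namespaces/default/configmaps/tester-update-cm
  $ kubectl get configmap tester-update-cm    # now Error from server (NotFound)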
I1017 13:07:25.774] +++ exit code: 0
I1017 13:07:25.834] Recording: run_kubectl_create_kustomization_directory_tests
I1017 13:07:25.834] Running command: run_kubectl_create_kustomization_directory_tests
I1017 13:07:25.872] 
I1017 13:07:25.876] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I1017 13:07:29.309] valid-pod   0/1     Pending   0          1s
I1017 13:07:29.309] has:valid-pod
I1017 13:07:30.431] Successful
I1017 13:07:30.431] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 13:07:30.431] valid-pod   0/1     Pending   0          1s
I1017 13:07:30.431] STATUS      REASON          MESSAGE
I1017 13:07:30.431] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1017 13:07:30.432] has:Timeout exceeded while reading body
I1017 13:07:30.556] Successful
I1017 13:07:30.557] message:NAME        READY   STATUS    RESTARTS   AGE
I1017 13:07:30.557] valid-pod   0/1     Pending   0          2s
I1017 13:07:30.558] has:valid-pod
I1017 13:07:30.659] Successful
I1017 13:07:30.660] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1017 13:07:30.660] has:Invalid timeout value
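The rejected value is kubectl's global --request-timeout flag, which takes a bare integer (seconds) or an integer plus unit. Sketch (values hypothetical):
  $ kubectl get pod valid-pod --request-timeout=foo   # error: Invalid timeout value
  $ kubectl get pod valid-pod --request-timeout=1m    # accepted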
I1017 13:07:30.753] pod "valid-pod" deleted
I1017 13:07:30.777] +++ exit code: 0
I1017 13:07:30.812] Recording: run_crd_tests
I1017 13:07:30.812] Running command: run_crd_tests
I1017 13:07:30.838] 
... skipping 158 lines ...
I1017 13:07:36.851] foo.company.com/test patched
I1017 13:07:36.974] crd.sh:236: Successful get foos/test {{.patched}}: value1
I1017 13:07:37.076] foo.company.com/test patched
I1017 13:07:37.185] crd.sh:238: Successful get foos/test {{.patched}}: value2
I1017 13:07:37.292] foo.company.com/test patched
I1017 13:07:37.413] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I1017 13:07:37.621] +++ [1017 13:07:37] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
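The hint in the message is the key point: strategic merge patch requires the built-in type schema, which a custom resource does not have, so patching a CR locally needs a JSON merge patch. Sketch with a hypothetical manifest:
  $ kubectl patch --local -f cr.yaml -p '{"patched":"value"}' -o yaml               # fails: no schema for strategic merge
  $ kubectl patch --local -f cr.yaml -p '{"patched":"value"}' --type=merge -o yaml  # succeeds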
I1017 13:07:37.704] {
I1017 13:07:37.705]     "apiVersion": "company.com/v1",
I1017 13:07:37.705]     "kind": "Foo",
I1017 13:07:37.705]     "metadata": {
I1017 13:07:37.705]         "annotations": {
I1017 13:07:37.705]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 191 lines ...
I1017 13:08:09.341] namespace/non-native-resources created
I1017 13:08:09.554] bar.company.com/test created
I1017 13:08:09.682] crd.sh:455: Successful get bars {{len .items}}: 1
I1017 13:08:09.789] namespace "non-native-resources" deleted
I1017 13:08:15.125] crd.sh:458: Successful get bars {{len .items}}: 0
I1017 13:08:15.367] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W1017 13:08:15.468] Error from server (NotFound): namespaces "non-native-resources" not found
I1017 13:08:15.569] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I1017 13:08:15.668] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1017 13:08:15.843] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I1017 13:08:15.917] +++ exit code: 0
I1017 13:08:15.985] Recording: run_cmd_with_img_tests
I1017 13:08:15.985] Running command: run_cmd_with_img_tests
... skipping 4 lines ...
I1017 13:08:16.057] +++ [1017 13:08:16] Creating namespace namespace-1571317696-22788
I1017 13:08:16.142] namespace/namespace-1571317696-22788 created
I1017 13:08:16.255] Context "test" modified.
I1017 13:08:16.280] +++ [1017 13:08:16] Testing cmd with image
W1017 13:08:16.381] W1017 13:08:16.370204   49669 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 13:08:16.381] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:08:16.382] E1017 13:08:16.371815   53264 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:16.395] I1017 13:08:16.394231   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317696-22788", Name:"test1", UID:"095e5bf9-4a64-415d-a556-0abd8d94a775", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-6cdffdb5b8 to 1
W1017 13:08:16.402] I1017 13:08:16.402006   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317696-22788", Name:"test1-6cdffdb5b8", UID:"d14ee739-dffc-4007-b96e-5d527c1cd2ed", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-kc55w
I1017 13:08:16.504] Successful
I1017 13:08:16.504] message:deployment.apps/test1 created
I1017 13:08:16.504] has:deployment.apps/test1 created
I1017 13:08:16.523] deployment.apps "test1" deleted
W1017 13:08:16.623] W1017 13:08:16.516260   49669 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 13:08:16.625] E1017 13:08:16.520262   53264 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:16.683] W1017 13:08:16.682965   49669 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 13:08:16.685] E1017 13:08:16.684429   53264 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:16.785] Successful
I1017 13:08:16.786] message:error: Invalid image name "InvalidImageName": invalid reference format
I1017 13:08:16.786] has:error: Invalid image name "InvalidImageName": invalid reference format
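kubectl validates the image reference client-side before creating anything, which is all this check exercises. Sketch:
  $ kubectl run test2 --image=InvalidImageName
  error: Invalid image name "InvalidImageName": invalid reference format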
I1017 13:08:16.787] +++ exit code: 0
I1017 13:08:16.787] +++ [1017 13:08:16] Testing recursive resources
I1017 13:08:16.787] +++ [1017 13:08:16] Creating namespace namespace-1571317696-23000
I1017 13:08:16.793] namespace/namespace-1571317696-23000 created
I1017 13:08:16.883] Context "test" modified.
W1017 13:08:16.984] W1017 13:08:16.862819   49669 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1017 13:08:16.985] E1017 13:08:16.864059   53264 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:17.085] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:17.385] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:17.389] Successful
I1017 13:08:17.389] message:pod/busybox0 created
I1017 13:08:17.389] pod/busybox1 created
I1017 13:08:17.389] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 13:08:17.390] has:error validating data: kind not set
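The recursive-resource checks that follow all share one pattern: point kubectl at a directory tree holding two good manifests and one broken one, then assert the good objects are processed while the broken file yields a decode or validation error. Sketch:
  $ kubectl create -f hack/testdata/recursive/pod --recursive
  pod/busybox0 created
  pod/busybox1 created
  error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false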
W1017 13:08:17.490] E1017 13:08:17.373703   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:17.522] E1017 13:08:17.522303   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:17.623] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:17.745] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1017 13:08:17.748] Successful
I1017 13:08:17.749] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:17.749] has:Object 'Kind' is missing
W1017 13:08:17.850] E1017 13:08:17.685878   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:17.868] E1017 13:08:17.867521   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:17.968] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:18.242] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1017 13:08:18.245] Successful
I1017 13:08:18.245] message:pod/busybox0 replaced
I1017 13:08:18.246] pod/busybox1 replaced
I1017 13:08:18.246] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 13:08:18.247] has:error validating data: kind not set
I1017 13:08:18.366] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:18.502] Successful
I1017 13:08:18.503] message:Name:         busybox0
I1017 13:08:18.503] Namespace:    namespace-1571317696-23000
I1017 13:08:18.504] Priority:     0
I1017 13:08:18.504] Node:         <none>
... skipping 159 lines ...
I1017 13:08:18.520] has:Object 'Kind' is missing
I1017 13:08:18.625] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:18.863] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1017 13:08:18.869] Successful
I1017 13:08:18.872] message:pod/busybox0 annotated
I1017 13:08:18.872] pod/busybox1 annotated
I1017 13:08:18.873] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:18.873] has:Object 'Kind' is missing
W1017 13:08:18.974] E1017 13:08:18.375198   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:18.974] E1017 13:08:18.524299   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:18.975] E1017 13:08:18.688303   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:18.975] E1017 13:08:18.869800   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:19.076] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:19.426] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1017 13:08:19.431] Successful
I1017 13:08:19.432] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1017 13:08:19.432] pod/busybox0 configured
I1017 13:08:19.433] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1017 13:08:19.433] pod/busybox1 configured
I1017 13:08:19.433] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1017 13:08:19.434] has:error validating data: kind not set
W1017 13:08:19.534] E1017 13:08:19.376913   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:19.535] E1017 13:08:19.526867   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:19.636] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:19.765] deployment.apps/nginx created
W1017 13:08:19.866] E1017 13:08:19.690305   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:19.867] I1017 13:08:19.769883   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317696-23000", Name:"nginx", UID:"9be761c6-c5fd-41cf-bdeb-594782c8254b", APIVersion:"apps/v1", ResourceVersion:"974", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1017 13:08:19.867] I1017 13:08:19.773911   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317696-23000", Name:"nginx-f87d999f7", UID:"c04812cf-9329-4f52-b312-9dc86680c5e3", APIVersion:"apps/v1", ResourceVersion:"975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-t429b
W1017 13:08:19.868] I1017 13:08:19.777423   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317696-23000", Name:"nginx-f87d999f7", UID:"c04812cf-9329-4f52-b312-9dc86680c5e3", APIVersion:"apps/v1", ResourceVersion:"975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-47c5x
W1017 13:08:19.868] I1017 13:08:19.794962   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317696-23000", Name:"nginx-f87d999f7", UID:"c04812cf-9329-4f52-b312-9dc86680c5e3", APIVersion:"apps/v1", ResourceVersion:"975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-sn9vn
W1017 13:08:19.871] E1017 13:08:19.871226   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:19.972] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 13:08:20.033] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:08:20.243] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I1017 13:08:20.246] Successful
I1017 13:08:20.247] message:apiVersion: extensions/v1beta1
I1017 13:08:20.247] kind: Deployment
... skipping 37 lines ...
I1017 13:08:20.252]       terminationGracePeriodSeconds: 30
I1017 13:08:20.252] status: {}
I1017 13:08:20.253] has:extensions/v1beta1
W1017 13:08:20.353] I1017 13:08:19.977298   53264 namespace_controller.go:185] Namespace has been deleted non-native-resources
W1017 13:08:20.353] kubectl convert is DEPRECATED and will be removed in a future version.
W1017 13:08:20.354] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1017 13:08:20.379] E1017 13:08:20.378656   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:20.480] deployment.apps "nginx" deleted
I1017 13:08:20.483] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:20.698] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:20.702] Successful
I1017 13:08:20.702] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1017 13:08:20.703] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1017 13:08:20.703] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:20.703] has:Object 'Kind' is missing
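kubectl convert (deprecated, as the warnings note) rewrites a saved manifest to another API version without touching the cluster; the checks above confirm the deployment round-trips to extensions/v1beta1. Sketch with a hypothetical file:
  $ kubectl convert -f nginx-deployment.yaml --output-version=extensions/v1beta1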
W1017 13:08:20.804] E1017 13:08:20.528768   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:20.804] E1017 13:08:20.691578   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:20.873] E1017 13:08:20.872554   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:20.974] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:20.974] Successful
I1017 13:08:20.974] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:20.974] has:busybox0:busybox1:
I1017 13:08:20.975] Successful
I1017 13:08:20.975] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:20.975] has:Object 'Kind' is missing
I1017 13:08:21.044] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:21.157] pod/busybox0 labeled
I1017 13:08:21.157] pod/busybox1 labeled
I1017 13:08:21.158] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:21.290] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1017 13:08:21.295] Successful
I1017 13:08:21.296] message:pod/busybox0 labeled
I1017 13:08:21.296] pod/busybox1 labeled
I1017 13:08:21.296] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:21.297] has:Object 'Kind' is missing
W1017 13:08:21.398] E1017 13:08:21.379967   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:21.499] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:21.531] pod/busybox0 patched
I1017 13:08:21.532] pod/busybox1 patched
I1017 13:08:21.532] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:21.634] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1017 13:08:21.636] Successful
I1017 13:08:21.636] message:pod/busybox0 patched
I1017 13:08:21.637] pod/busybox1 patched
I1017 13:08:21.637] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:21.638] has:Object 'Kind' is missing
W1017 13:08:21.738] E1017 13:08:21.530613   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:21.739] E1017 13:08:21.693757   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:21.842] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:21.994] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:21.998] Successful
I1017 13:08:21.998] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 13:08:21.998] pod "busybox0" force deleted
I1017 13:08:21.998] pod "busybox1" force deleted
I1017 13:08:21.999] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1017 13:08:21.999] has:Object 'Kind' is missing
W1017 13:08:22.099] E1017 13:08:21.874184   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:22.200] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:22.318] replicationcontroller/busybox0 created
I1017 13:08:22.322] replicationcontroller/busybox1 created
W1017 13:08:22.423] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1017 13:08:22.424] I1017 13:08:22.322990   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317696-23000", Name:"busybox0", UID:"6a0ea60f-98b6-4801-8f96-9c6e7be0c3ba", APIVersion:"v1", ResourceVersion:"1006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-hx284
W1017 13:08:22.424] I1017 13:08:22.326974   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317696-23000", Name:"busybox1", UID:"5cb4a889-4a5d-470d-a22b-f0dd72fdc82d", APIVersion:"v1", ResourceVersion:"1008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-hfx44
W1017 13:08:22.425] E1017 13:08:22.381860   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:22.525] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:22.558] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:22.651] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 13:08:22.748] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 13:08:22.943] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1017 13:08:23.045] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1017 13:08:23.048] Successful
I1017 13:08:23.048] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1017 13:08:23.048] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1017 13:08:23.049] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:23.049] has:Object 'Kind' is missing
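A hedged sketch of the autoscale step, assuming kubectl autoscale accepts the recursive filename flags used elsewhere in this suite; the HPA bounds match the 1/2/80 values asserted above:
  $ kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80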
I1017 13:08:23.148] horizontalpodautoscaler.autoscaling "busybox0" deleted
W1017 13:08:23.249] E1017 13:08:22.532008   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:23.249] E1017 13:08:22.695100   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:23.250] E1017 13:08:22.875581   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:23.350] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1017 13:08:23.383] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:23.494] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 13:08:23.595] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 13:08:23.835] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1017 13:08:23.956] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1017 13:08:23.959] Successful
I1017 13:08:23.960] message:service/busybox0 exposed
I1017 13:08:23.960] service/busybox1 exposed
I1017 13:08:23.961] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:23.962] has:Object 'Kind' is missing
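Sketch of the expose step: each replication controller gets a service on port 80, and since no port name is set the checks above print <no value> for it.
  $ kubectl expose rc busybox0 --port=80
  $ kubectl expose rc busybox1 --port=80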
W1017 13:08:24.063] E1017 13:08:23.384320   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:24.064] E1017 13:08:23.533689   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:24.064] E1017 13:08:23.696561   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:24.064] E1017 13:08:23.877116   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:24.165] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:24.174] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I1017 13:08:24.285] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I1017 13:08:24.532] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I1017 13:08:24.636] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I1017 13:08:24.639] Successful
I1017 13:08:24.640] message:replicationcontroller/busybox0 scaled
I1017 13:08:24.640] replicationcontroller/busybox1 scaled
I1017 13:08:24.640] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:24.641] has:Object 'Kind' is missing
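Sketch of the scale step; kubectl scale takes several resources in one call, so both controllers move from 1 to 2 replicas (the broken manifest in the directory still produces the decode error when the recursive form is used):
  $ kubectl scale rc busybox0 busybox1 --replicas=2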
I1017 13:08:24.740] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:24.978] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:24.982] Successful
I1017 13:08:24.983] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 13:08:24.983] replicationcontroller "busybox0" force deleted
I1017 13:08:24.983] replicationcontroller "busybox1" force deleted
I1017 13:08:24.984] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:24.984] has:Object 'Kind' is missing
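Sketch of the teardown used throughout these recursive tests; --force with --grace-period=0 skips waiting for termination, hence the warning:
  $ kubectl delete -f hack/testdata/recursive/rc --recursive --force --grace-period=0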
W1017 13:08:25.084] E1017 13:08:24.385598   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:25.085] I1017 13:08:24.398330   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317696-23000", Name:"busybox0", UID:"6a0ea60f-98b6-4801-8f96-9c6e7be0c3ba", APIVersion:"v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-97lpw
W1017 13:08:25.085] I1017 13:08:24.411261   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317696-23000", Name:"busybox1", UID:"5cb4a889-4a5d-470d-a22b-f0dd72fdc82d", APIVersion:"v1", ResourceVersion:"1033", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-kxt9t
W1017 13:08:25.086] E1017 13:08:24.535558   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:25.086] E1017 13:08:24.698149   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:25.087] E1017 13:08:24.879100   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:25.187] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:25.356] deployment.apps/nginx1-deployment created
I1017 13:08:25.361] deployment.apps/nginx0-deployment created
W1017 13:08:25.462] I1017 13:08:25.361254   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317696-23000", Name:"nginx1-deployment", UID:"65472275-33ad-4fda-92ac-2cba5cbc45b1", APIVersion:"apps/v1", ResourceVersion:"1048", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W1017 13:08:25.462] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1017 13:08:25.463] I1017 13:08:25.370402   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317696-23000", Name:"nginx1-deployment-7bdbbfb5cf", UID:"75a5da50-2710-4e57-a6b3-b73ed8593fda", APIVersion:"apps/v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-k7dqw
W1017 13:08:25.463] I1017 13:08:25.371192   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317696-23000", Name:"nginx0-deployment", UID:"2578f210-c5ea-4d6f-bef4-9110e19f594b", APIVersion:"apps/v1", ResourceVersion:"1050", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W1017 13:08:25.464] I1017 13:08:25.375951   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317696-23000", Name:"nginx1-deployment-7bdbbfb5cf", UID:"75a5da50-2710-4e57-a6b3-b73ed8593fda", APIVersion:"apps/v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-vs4rk
W1017 13:08:25.464] I1017 13:08:25.376372   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317696-23000", Name:"nginx0-deployment-57c6bff7f6", UID:"f3203011-08c4-4859-85e0-36df10b11a34", APIVersion:"apps/v1", ResourceVersion:"1051", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-cr25s
W1017 13:08:25.464] I1017 13:08:25.382203   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317696-23000", Name:"nginx0-deployment-57c6bff7f6", UID:"f3203011-08c4-4859-85e0-36df10b11a34", APIVersion:"apps/v1", ResourceVersion:"1051", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-6jjs9
W1017 13:08:25.465] E1017 13:08:25.387962   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:25.538] E1017 13:08:25.537246   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:25.638] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1017 13:08:25.639] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1017 13:08:25.927] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1017 13:08:25.931] Successful
I1017 13:08:25.931] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I1017 13:08:25.931] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I1017 13:08:25.932] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:08:25.932] has:Object 'Kind' is missing
W1017 13:08:26.033] E1017 13:08:25.700056   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:26.034] E1017 13:08:25.880486   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:26.134] deployment.apps/nginx1-deployment paused
I1017 13:08:26.134] deployment.apps/nginx0-deployment paused
I1017 13:08:26.203] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1017 13:08:26.205] Successful
I1017 13:08:26.206] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:08:26.206] has:Object 'Kind' is missing
I1017 13:08:26.315] deployment.apps/nginx1-deployment resumed
I1017 13:08:26.321] deployment.apps/nginx0-deployment resumed
W1017 13:08:26.422] E1017 13:08:26.390208   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:26.523] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I1017 13:08:26.523] Successful
I1017 13:08:26.524] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:08:26.524] has:Object 'Kind' is missing
I1017 13:08:26.601] Successful
I1017 13:08:26.601] message:deployment.apps/nginx1-deployment 
I1017 13:08:26.602] REVISION  CHANGE-CAUSE
I1017 13:08:26.602] 1         <none>
I1017 13:08:26.602] 
I1017 13:08:26.602] deployment.apps/nginx0-deployment 
I1017 13:08:26.602] REVISION  CHANGE-CAUSE
I1017 13:08:26.602] 1         <none>
I1017 13:08:26.602] 
I1017 13:08:26.603] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:08:26.603] has:nginx0-deployment
I1017 13:08:26.604] Successful
I1017 13:08:26.604] message:deployment.apps/nginx1-deployment 
I1017 13:08:26.604] REVISION  CHANGE-CAUSE
I1017 13:08:26.604] 1         <none>
I1017 13:08:26.604] 
I1017 13:08:26.604] deployment.apps/nginx0-deployment 
I1017 13:08:26.604] REVISION  CHANGE-CAUSE
I1017 13:08:26.604] 1         <none>
I1017 13:08:26.604] 
I1017 13:08:26.605] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:08:26.605] has:nginx1-deployment
I1017 13:08:26.606] Successful
I1017 13:08:26.606] message:deployment.apps/nginx1-deployment 
I1017 13:08:26.606] REVISION  CHANGE-CAUSE
I1017 13:08:26.606] 1         <none>
I1017 13:08:26.606] 
I1017 13:08:26.606] deployment.apps/nginx0-deployment 
I1017 13:08:26.607] REVISION  CHANGE-CAUSE
I1017 13:08:26.607] 1         <none>
I1017 13:08:26.607] 
I1017 13:08:26.607] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1017 13:08:26.607] has:Object 'Kind' is missing
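
The three rollout-history checks show the same recursive behavior for kubectl rollout history: each decodable deployment gets its own REVISION/CHANGE-CAUSE table, and the undecodable file is still surfaced as an error. A sketch under the same assumptions as above:

    kubectl rollout history -f ./testdata/deployment --recursive
    # deployment.apps/nginx1-deployment
    # REVISION  CHANGE-CAUSE
    # 1         <none>
    # ...same table for nginx0-deployment, then the "unable to decode" error.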
I1017 13:08:26.696] deployment.apps "nginx1-deployment" force deleted
I1017 13:08:26.702] deployment.apps "nginx0-deployment" force deleted
W1017 13:08:26.803] E1017 13:08:26.538817   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:26.804] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:08:26.804] E1017 13:08:26.702186   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:26.805] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W1017 13:08:26.883] E1017 13:08:26.882253   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:27.393] E1017 13:08:27.392667   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:27.541] E1017 13:08:27.540636   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:27.705] E1017 13:08:27.704333   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:27.833] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:28.052] replicationcontroller/busybox0 created
I1017 13:08:28.057] replicationcontroller/busybox1 created
W1017 13:08:28.158] E1017 13:08:27.883302   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:28.160] I1017 13:08:28.056034   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317696-23000", Name:"busybox0", UID:"ef30cfa1-9911-4258-995a-8d3e0fa95d59", APIVersion:"v1", ResourceVersion:"1097", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7hpql
W1017 13:08:28.160] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1017 13:08:28.161] I1017 13:08:28.063071   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317696-23000", Name:"busybox1", UID:"c366a90f-eb79-48a1-910b-8c1910e6cce5", APIVersion:"v1", ResourceVersion:"1099", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-p7bqb
I1017 13:08:28.261] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1017 13:08:28.337] Successful
I1017 13:08:28.338] message:no rollbacker has been implemented for "ReplicationController"
I1017 13:08:28.338] no rollbacker has been implemented for "ReplicationController"
I1017 13:08:28.338] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:28.338] has:no rollbacker has been implemented for "ReplicationController"
I1017 13:08:28.341] Successful
I1017 13:08:28.341] message:no rollbacker has been implemented for "ReplicationController"
I1017 13:08:28.342] no rollbacker has been implemented for "ReplicationController"
I1017 13:08:28.342] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:28.343] has:Object 'Kind' is missing
W1017 13:08:28.443] E1017 13:08:28.394256   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:28.543] E1017 13:08:28.542463   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:28.644] Successful
I1017 13:08:28.645] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:28.645] error: replicationcontrollers "busybox0" pausing is not supported
I1017 13:08:28.646] error: replicationcontrollers "busybox1" pausing is not supported
I1017 13:08:28.647] has:Object 'Kind' is missing
I1017 13:08:28.647] Successful
I1017 13:08:28.647] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:28.648] error: replicationcontrollers "busybox0" pausing is not supported
I1017 13:08:28.648] error: replicationcontrollers "busybox1" pausing is not supported
I1017 13:08:28.648] has:replicationcontrollers "busybox0" pausing is not supported
I1017 13:08:28.649] Successful
I1017 13:08:28.649] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:28.650] error: replicationcontrollers "busybox0" pausing is not supported
I1017 13:08:28.650] error: replicationcontrollers "busybox1" pausing is not supported
I1017 13:08:28.650] has:replicationcontrollers "busybox1" pausing is not supported
I1017 13:08:28.650] Successful
I1017 13:08:28.651] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:28.651] error: replicationcontrollers "busybox0" resuming is not supported
I1017 13:08:28.651] error: replicationcontrollers "busybox1" resuming is not supported
I1017 13:08:28.651] has:Object 'Kind' is missing
I1017 13:08:28.651] Successful
I1017 13:08:28.652] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:28.652] error: replicationcontrollers "busybox0" resuming is not supported
I1017 13:08:28.652] error: replicationcontrollers "busybox1" resuming is not supported
I1017 13:08:28.652] has:replicationcontrollers "busybox0" resuming is not supported
I1017 13:08:28.652] Successful
I1017 13:08:28.653] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1017 13:08:28.653] error: replicationcontrollers "busybox0" resuming is not supported
I1017 13:08:28.653] error: replicationcontrollers "busybox1" resuming is not supported
I1017 13:08:28.653] has:replicationcontrollers "busybox1" resuming is not supported
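
Pause, resume, and rollback are only wired up for kinds whose controllers support them (Deployments, in this suite); for ReplicationControllers the log shows "no rollbacker has been implemented" plus explicit "pausing/resuming is not supported" errors. Sketched against the same fixture layout, path illustrative:

    kubectl rollout pause -f ./testdata/rc --recursive
    # error: replicationcontrollers "busybox0" pausing is not supported
    kubectl rollout resume -f ./testdata/rc --recursive
    # error: replicationcontrollers "busybox0" resuming is not supported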
I1017 13:08:28.712] replicationcontroller "busybox0" force deleted
I1017 13:08:28.717] replicationcontroller "busybox1" force deleted
W1017 13:08:28.818] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1017 13:08:28.819] E1017 13:08:28.706119   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:28.819] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W1017 13:08:28.886] E1017 13:08:28.885694   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:29.396] E1017 13:08:29.396220   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:29.545] E1017 13:08:29.544132   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:29.709] E1017 13:08:29.708359   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:29.809] Recording: run_namespace_tests
I1017 13:08:29.810] Running command: run_namespace_tests
I1017 13:08:29.810] 
I1017 13:08:29.810] +++ Running case: test-cmd.run_namespace_tests 
I1017 13:08:29.810] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:08:29.810] +++ command: run_namespace_tests
I1017 13:08:29.810] +++ [1017 13:08:29] Testing kubectl(v1:namespaces)
I1017 13:08:29.883] namespace/my-namespace created
W1017 13:08:29.984] E1017 13:08:29.886594   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:30.085] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1017 13:08:30.101] namespace "my-namespace" deleted
W1017 13:08:30.398] E1017 13:08:30.397851   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:30.546] E1017 13:08:30.546119   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:30.710] E1017 13:08:30.709924   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:30.888] E1017 13:08:30.888323   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:31.101] I1017 13:08:31.100403   53264 shared_informer.go:197] Waiting for caches to sync for resource quota
W1017 13:08:31.101] I1017 13:08:31.101441   53264 shared_informer.go:204] Caches are synced for resource quota 
W1017 13:08:31.400] E1017 13:08:31.400093   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:31.510] I1017 13:08:31.509270   53264 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1017 13:08:31.510] I1017 13:08:31.509389   53264 shared_informer.go:204] Caches are synced for garbage collector 
W1017 13:08:31.549] E1017 13:08:31.548407   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:31.712] E1017 13:08:31.711714   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:31.891] E1017 13:08:31.890336   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:32.403] E1017 13:08:32.402366   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:32.551] E1017 13:08:32.550634   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:32.714] E1017 13:08:32.713651   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:32.893] E1017 13:08:32.892551   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:33.404] E1017 13:08:33.404004   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:33.553] E1017 13:08:33.552594   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:33.716] E1017 13:08:33.715310   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:33.895] E1017 13:08:33.894656   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:34.406] E1017 13:08:34.405541   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:34.554] E1017 13:08:34.554003   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:34.717] E1017 13:08:34.716843   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:34.896] E1017 13:08:34.896159   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:35.243] namespace/my-namespace condition met
I1017 13:08:35.384] Successful
I1017 13:08:35.385] message:Error from server (NotFound): namespaces "my-namespace" not found
I1017 13:08:35.385] has: not found
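
The "condition met" line is the namespace test waiting out the deletion before asserting NotFound; namespaces delete asynchronously, so a bare get can still succeed while the object is Terminating. A sketch (the timeout value is illustrative):

    kubectl create namespace my-namespace
    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s
    kubectl get namespace my-namespace    # Error from server (NotFound): namespaces "my-namespace" not found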
I1017 13:08:35.485] namespace/my-namespace created
W1017 13:08:35.586] E1017 13:08:35.408615   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:35.587] E1017 13:08:35.556050   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:35.687] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1017 13:08:35.893] Successful
I1017 13:08:35.893] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1017 13:08:35.894] namespace "kube-node-lease" deleted
I1017 13:08:35.894] namespace "my-namespace" deleted
I1017 13:08:35.894] namespace "namespace-1571317530-341" deleted
... skipping 27 lines ...
I1017 13:08:35.898] namespace "namespace-1571317647-28842" deleted
I1017 13:08:35.899] namespace "namespace-1571317648-13652" deleted
I1017 13:08:35.899] namespace "namespace-1571317650-8133" deleted
I1017 13:08:35.899] namespace "namespace-1571317652-20009" deleted
I1017 13:08:35.899] namespace "namespace-1571317696-22788" deleted
I1017 13:08:35.899] namespace "namespace-1571317696-23000" deleted
I1017 13:08:35.899] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1017 13:08:35.900] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1017 13:08:35.900] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1017 13:08:35.900] has:warning: deleting cluster-scoped resources
I1017 13:08:35.900] Successful
I1017 13:08:35.900] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1017 13:08:35.900] namespace "kube-node-lease" deleted
I1017 13:08:35.901] namespace "my-namespace" deleted
I1017 13:08:35.901] namespace "namespace-1571317530-341" deleted
... skipping 27 lines ...
I1017 13:08:35.905] namespace "namespace-1571317647-28842" deleted
I1017 13:08:35.905] namespace "namespace-1571317648-13652" deleted
I1017 13:08:35.905] namespace "namespace-1571317650-8133" deleted
I1017 13:08:35.905] namespace "namespace-1571317652-20009" deleted
I1017 13:08:35.905] namespace "namespace-1571317696-22788" deleted
I1017 13:08:35.906] namespace "namespace-1571317696-23000" deleted
I1017 13:08:35.906] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1017 13:08:35.906] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1017 13:08:35.906] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1017 13:08:35.906] has:namespace "my-namespace" deleted
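
Deleting all namespaces triggers the cluster-scoped warning, removes every user namespace, and is refused for the protected system namespaces, which is exactly what the two Successful blocks assert. A sketch:

    kubectl delete namespaces --all
    # warning: deleting cluster-scoped resources, not scoped to the provided namespace
    # namespace "my-namespace" deleted          (one line per user namespace)
    # Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted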
W1017 13:08:36.007] E1017 13:08:35.718205   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:36.008] E1017 13:08:35.898827   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:36.108] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I1017 13:08:36.171] namespace/other created
I1017 13:08:36.309] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I1017 13:08:36.450] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:36.661] pod/valid-pod created
W1017 13:08:36.762] E1017 13:08:36.410671   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:36.762] E1017 13:08:36.558357   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:36.763] E1017 13:08:36.719972   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:36.863] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:08:36.908] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:08:37.009] Successful
I1017 13:08:37.010] message:error: a resource cannot be retrieved by name across all namespaces
I1017 13:08:37.010] has:a resource cannot be retrieved by name across all namespaces
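
Object names are only unique within a namespace, so kubectl refuses a by-name get combined with --all-namespaces; the pod has to be fetched from a specific namespace. A sketch:

    kubectl get pods valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces
    kubectl get pods valid-pod --namespace=other  # succeeds: the name is scoped to one namespace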
I1017 13:08:37.112] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1017 13:08:37.209] pod "valid-pod" force deleted
W1017 13:08:37.310] E1017 13:08:36.900882   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:37.310] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1017 13:08:37.411] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:37.445] namespace "other" deleted
W1017 13:08:37.546] E1017 13:08:37.412333   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:37.560] E1017 13:08:37.559847   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:37.722] E1017 13:08:37.721839   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:37.833] I1017 13:08:37.832872   53264 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1571317696-23000
W1017 13:08:37.837] I1017 13:08:37.836872   53264 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1571317696-23000
W1017 13:08:37.908] E1017 13:08:37.907769   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:38.415] E1017 13:08:38.414233   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:38.562] E1017 13:08:38.561821   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:38.724] E1017 13:08:38.723638   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:38.910] E1017 13:08:38.909357   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:39.416] E1017 13:08:39.415746   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:39.564] E1017 13:08:39.564026   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:39.725] E1017 13:08:39.725306   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:39.911] E1017 13:08:39.911076   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:40.418] E1017 13:08:40.417421   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:40.566] E1017 13:08:40.565649   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:40.727] E1017 13:08:40.726786   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:40.912] E1017 13:08:40.912134   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:41.420] E1017 13:08:41.419440   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:41.568] E1017 13:08:41.567671   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:41.729] E1017 13:08:41.728428   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:41.915] E1017 13:08:41.914946   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:42.423] E1017 13:08:42.422642   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:42.569] E1017 13:08:42.569113   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:42.670] +++ exit code: 0
I1017 13:08:42.671] Recording: run_secrets_test
I1017 13:08:42.671] Running command: run_secrets_test
I1017 13:08:42.671] 
I1017 13:08:42.671] +++ Running case: test-cmd.run_secrets_test 
I1017 13:08:42.671] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 37 lines ...
I1017 13:08:42.954] metadata:
I1017 13:08:42.954]   creationTimestamp: null
I1017 13:08:42.954]   name: test
I1017 13:08:42.955] has not:example.com
I1017 13:08:43.051] core.sh:725: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
I1017 13:08:43.131] namespace/test-secrets created
W1017 13:08:43.231] E1017 13:08:42.731130   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:43.232] E1017 13:08:42.917619   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:43.232] I1017 13:08:42.935559   69240 loader.go:375] Config loaded from file:  /tmp/tmp.Tf9UxSIjmH/.kube/config
I1017 13:08:43.333] core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
I1017 13:08:43.360] core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:43.445] secret/test-secret created
I1017 13:08:43.547] core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1017 13:08:43.645] core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
I1017 13:08:43.833] secret "test-secret" deleted
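
The first secret in this suite carries an arbitrary type string, which the core.sh:738 check reads back. A sketch of such a manifest (the data payload is illustrative):

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: test-secret
      namespace: test-secrets
    type: test-type
    data:
      username: dGVzdA==    # base64("test")
    EOF
    kubectl get secret/test-secret -n test-secrets -o go-template='{{.type}}'   # test-type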
W1017 13:08:43.934] E1017 13:08:43.424023   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:43.934] E1017 13:08:43.570460   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:43.935] E1017 13:08:43.732195   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:43.935] E1017 13:08:43.918952   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:44.036] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:44.036] secret/test-secret created
I1017 13:08:44.146] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1017 13:08:44.257] core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I1017 13:08:44.454] secret "test-secret" deleted
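
The second secret's type, kubernetes.io/dockerconfigjson, is what the docker-registry generator stamps automatically; the credentials below are illustrative:

    kubectl create secret docker-registry test-secret -n test-secrets \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=user --docker-password=pass --docker-email=user@example.com
    kubectl get secret/test-secret -n test-secrets -o go-template='{{.type}}'   # kubernetes.io/dockerconfigjson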
W1017 13:08:44.556] E1017 13:08:44.425260   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:44.572] E1017 13:08:44.571580   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:44.672] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:44.673] secret/test-secret created
I1017 13:08:44.740] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1017 13:08:44.853] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1017 13:08:44.956] secret "test-secret" deleted
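
Likewise, kubernetes.io/tls is the type stamped by the tls generator; the cert and key paths are illustrative and must point at a matching PEM pair:

    kubectl create secret tls test-secret -n test-secrets --cert=tls.crt --key=tls.key
    kubectl get secret/test-secret -n test-secrets -o go-template='{{.type}}'   # kubernetes.io/tls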
W1017 13:08:45.057] E1017 13:08:44.733570   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:45.057] E1017 13:08:44.920702   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:45.158] secret/test-secret created
I1017 13:08:45.188] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1017 13:08:45.312] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1017 13:08:45.420] secret "test-secret" deleted
W1017 13:08:45.521] I1017 13:08:45.331198   53264 namespace_controller.go:185] Namespace has been deleted my-namespace
W1017 13:08:45.521] E1017 13:08:45.426677   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:45.574] E1017 13:08:45.573339   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:45.674] secret/secret-string-data created
I1017 13:08:45.739] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1017 13:08:45.844] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1017 13:08:45.953] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I1017 13:08:46.057] secret "secret-string-data" deleted
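
Checks 796-798 confirm that stringData is write-only convenience input: the server base64-encodes it into .data (djE= and djI= are "v1" and "v2") and never persists .stringData itself. A sketch:

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-string-data
      namespace: test-secrets
    stringData:
      k1: v1
      k2: v2
    EOF
    kubectl get secret/secret-string-data -n test-secrets -o go-template='{{.data}}'        # map[k1:djE= k2:djI=]
    kubectl get secret/secret-string-data -n test-secrets -o go-template='{{.stringData}}'  # <no value>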
W1017 13:08:46.158] E1017 13:08:45.735381   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:46.159] E1017 13:08:45.922463   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:46.159] I1017 13:08:45.954370   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317533-25681
W1017 13:08:46.159] I1017 13:08:45.956889   53264 namespace_controller.go:185] Namespace has been deleted kube-node-lease
W1017 13:08:46.160] I1017 13:08:45.957337   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317545-16875
W1017 13:08:46.160] I1017 13:08:45.963901   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317550-8875
W1017 13:08:46.160] I1017 13:08:45.968636   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317549-16443
W1017 13:08:46.160] I1017 13:08:45.976690   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317530-341
... skipping 12 lines ...
W1017 13:08:46.267] I1017 13:08:46.267192   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317597-26890
W1017 13:08:46.295] I1017 13:08:46.294370   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317600-15653
I1017 13:08:46.395] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:08:46.396] secret "test-secret" deleted
I1017 13:08:46.488] namespace "test-secrets" deleted
W1017 13:08:46.589] I1017 13:08:46.413207   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317603-29021
W1017 13:08:46.589] E1017 13:08:46.428413   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:46.590] I1017 13:08:46.444933   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317623-3362
W1017 13:08:46.590] I1017 13:08:46.469079   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317624-19868
W1017 13:08:46.590] I1017 13:08:46.483772   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317641-29520
W1017 13:08:46.590] I1017 13:08:46.495393   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317604-22361
W1017 13:08:46.590] I1017 13:08:46.509710   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317647-27965
W1017 13:08:46.590] I1017 13:08:46.509786   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317642-28403
W1017 13:08:46.591] I1017 13:08:46.514761   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317625-5013
W1017 13:08:46.591] I1017 13:08:46.519299   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317647-28842
W1017 13:08:46.591] I1017 13:08:46.538239   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317634-16453
W1017 13:08:46.591] E1017 13:08:46.575167   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:46.592] I1017 13:08:46.591873   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317648-13652
W1017 13:08:46.615] I1017 13:08:46.614507   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317650-8133
W1017 13:08:46.635] I1017 13:08:46.634880   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317652-20009
W1017 13:08:46.662] I1017 13:08:46.661601   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317696-22788
W1017 13:08:46.703] I1017 13:08:46.702547   53264 namespace_controller.go:185] Namespace has been deleted namespace-1571317696-23000
W1017 13:08:46.737] E1017 13:08:46.736903   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:46.924] E1017 13:08:46.923929   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:47.431] E1017 13:08:47.430330   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:47.547] I1017 13:08:47.546605   53264 namespace_controller.go:185] Namespace has been deleted other
W1017 13:08:47.577] E1017 13:08:47.576517   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:47.739] E1017 13:08:47.738282   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:47.925] E1017 13:08:47.925257   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:48.433] E1017 13:08:48.432312   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:48.579] E1017 13:08:48.578677   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:48.740] E1017 13:08:48.739753   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:48.928] E1017 13:08:48.927427   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:49.434] E1017 13:08:49.433907   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:49.580] E1017 13:08:49.580212   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:49.741] E1017 13:08:49.741262   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:49.929] E1017 13:08:49.928815   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:50.436] E1017 13:08:50.435647   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:50.582] E1017 13:08:50.582183   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:50.744] E1017 13:08:50.743474   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:50.932] E1017 13:08:50.931310   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:51.439] E1017 13:08:51.438284   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:51.585] E1017 13:08:51.584076   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:51.685] +++ exit code: 0
I1017 13:08:51.704] Recording: run_configmap_tests
I1017 13:08:51.704] Running command: run_configmap_tests
I1017 13:08:51.743] 
I1017 13:08:51.748] +++ Running case: test-cmd.run_configmap_tests 
I1017 13:08:51.753] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:08:51.758] +++ command: run_configmap_tests
I1017 13:08:51.778] +++ [1017 13:08:51] Creating namespace namespace-1571317731-26195
W1017 13:08:51.879] E1017 13:08:51.745178   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:51.933] E1017 13:08:51.932875   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:52.034] namespace/namespace-1571317731-26195 created
I1017 13:08:52.034] Context "test" modified.
I1017 13:08:52.035] +++ [1017 13:08:51] Testing configmaps
I1017 13:08:52.239] configmap/test-configmap created
I1017 13:08:52.362] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I1017 13:08:52.450] configmap "test-configmap" deleted
W1017 13:08:52.551] E1017 13:08:52.440958   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:52.586] E1017 13:08:52.586017   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:52.687] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I1017 13:08:52.688] namespace/test-configmaps created
I1017 13:08:52.780] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
I1017 13:08:52.892] core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
I1017 13:08:53.000] core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
I1017 13:08:53.095] configmap/test-configmap created
I1017 13:08:53.185] configmap/test-binary-configmap created
W1017 13:08:53.286] E1017 13:08:52.746693   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:53.286] E1017 13:08:52.934304   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:53.387] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I1017 13:08:53.416] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I1017 13:08:53.715] configmap "test-configmap" deleted
W1017 13:08:53.816] E1017 13:08:53.442360   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:53.817] E1017 13:08:53.589023   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:53.817] E1017 13:08:53.748241   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:53.918] configmap "test-binary-configmap" deleted
I1017 13:08:53.926] namespace "test-configmaps" deleted
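
A sketch of the create path for the two configmaps checked above; the literal and file contents are illustrative, since core.sh builds its own fixtures:

    kubectl create configmap test-configmap -n test-configmaps --from-literal=key1=value1
    printf '\x00\x01\x02' > payload.bin
    kubectl create configmap test-binary-configmap -n test-configmaps --from-file=bin=payload.bin
    # Non-UTF-8 file contents are stored under .binaryData rather than .data.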
W1017 13:08:54.027] E1017 13:08:53.935529   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:54.444] E1017 13:08:54.444148   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:54.591] E1017 13:08:54.590440   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:54.750] E1017 13:08:54.749829   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:54.937] E1017 13:08:54.937094   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:55.446] E1017 13:08:55.445396   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:55.592] E1017 13:08:55.591795   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:55.752] E1017 13:08:55.751342   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:55.939] E1017 13:08:55.938354   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:56.447] E1017 13:08:56.446956   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:56.594] E1017 13:08:56.593585   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:56.613] I1017 13:08:56.613296   53264 namespace_controller.go:185] Namespace has been deleted test-secrets
W1017 13:08:56.754] E1017 13:08:56.753605   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:56.943] E1017 13:08:56.941632   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:57.453] E1017 13:08:57.451253   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:57.599] E1017 13:08:57.597393   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:57.759] E1017 13:08:57.757697   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:57.946] E1017 13:08:57.945204   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:58.456] E1017 13:08:58.454693   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:58.602] E1017 13:08:58.600187   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:58.762] E1017 13:08:58.761323   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:08:58.947] E1017 13:08:58.946989   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:59.103] +++ exit code: 0
I1017 13:08:59.163] Recording: run_client_config_tests
I1017 13:08:59.163] Running command: run_client_config_tests
I1017 13:08:59.205] 
I1017 13:08:59.211] +++ Running case: test-cmd.run_client_config_tests 
I1017 13:08:59.215] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:08:59.220] +++ command: run_client_config_tests
I1017 13:08:59.241] +++ [1017 13:08:59] Creating namespace namespace-1571317739-11807
I1017 13:08:59.372] namespace/namespace-1571317739-11807 created
W1017 13:08:59.473] E1017 13:08:59.457068   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:08:59.573] Context "test" modified.
I1017 13:08:59.574] +++ [1017 13:08:59] Testing client config
I1017 13:08:59.631] Successful
I1017 13:08:59.632] message:error: stat missing: no such file or directory
I1017 13:08:59.632] has:missing: no such file or directory
I1017 13:08:59.729] Successful
I1017 13:08:59.730] message:error: stat missing: no such file or directory
I1017 13:08:59.730] has:missing: no such file or directory
I1017 13:08:59.811] Successful
I1017 13:08:59.811] message:error: stat missing: no such file or directory
I1017 13:08:59.812] has:missing: no such file or directory
I1017 13:08:59.891] Successful
I1017 13:08:59.892] message:Error in configuration: context was not found for specified context: missing-context
I1017 13:08:59.892] has:context was not found for specified context: missing-context
I1017 13:08:59.970] Successful
I1017 13:08:59.970] message:error: no server found for cluster "missing-cluster"
I1017 13:08:59.970] has:no server found for cluster "missing-cluster"
I1017 13:09:00.050] Successful
I1017 13:09:00.051] message:error: auth info "missing-user" does not exist
I1017 13:09:00.051] has:auth info "missing-user" does not exist
W1017 13:09:00.151] E1017 13:08:59.602489   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:00.152] E1017 13:08:59.762782   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:00.152] E1017 13:08:59.948693   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:00.253] Successful
I1017 13:09:00.253] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1017 13:09:00.253] has:error loading config file
I1017 13:09:00.291] Successful
I1017 13:09:00.292] message:error: stat missing-config: no such file or directory
I1017 13:09:00.292] has:no such file or directory
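
Each Successful block in this case points kubectl at a missing piece of client configuration and asserts the resulting error; the flags below mirror the names in the log:

    kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context   # context was not found for specified context
    kubectl get pods --cluster=missing-cluster   # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user         # auth info "missing-user" does not exist
    # A kubeconfig declaring a bogus apiVersion (v-1) fails to load outright:
    # error loading config file "...": no kind "Config" is registered for version "v-1"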
I1017 13:09:00.310] +++ exit code: 0
I1017 13:09:00.355] Recording: run_service_accounts_tests
I1017 13:09:00.356] Running command: run_service_accounts_tests
I1017 13:09:00.385] 
I1017 13:09:00.388] +++ Running case: test-cmd.run_service_accounts_tests 
I1017 13:09:00.392] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:09:00.395] +++ command: run_service_accounts_tests
I1017 13:09:00.408] +++ [1017 13:09:00] Creating namespace namespace-1571317740-12314
I1017 13:09:00.493] namespace/namespace-1571317740-12314 created
I1017 13:09:00.571] Context "test" modified.
I1017 13:09:00.580] +++ [1017 13:09:00] Testing service accounts
W1017 13:09:00.682] E1017 13:09:00.458606   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:00.682] E1017 13:09:00.604074   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:00.765] E1017 13:09:00.764246   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:00.865] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I1017 13:09:00.866] namespace/test-service-accounts created
I1017 13:09:00.907] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I1017 13:09:00.992] serviceaccount/test-service-account created
I1017 13:09:01.097] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I1017 13:09:01.195] serviceaccount "test-service-account" deleted
I1017 13:09:01.290] namespace "test-service-accounts" deleted
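The core.sh:828-838 sequence above creates a namespace, creates a service account in it, verifies the name via a go-template, and tears both down. A hand-run equivalent would look roughly like:

  kubectl create namespace test-service-accounts
  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl get serviceaccount/test-service-account --namespace=test-service-accounts \
    -o go-template='{{.metadata.name}}'
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete namespace test-service-accounts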
W1017 13:09:01.391] E1017 13:09:00.950089   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:01.461] E1017 13:09:01.460462   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:01.606] E1017 13:09:01.605568   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:01.766] E1017 13:09:01.765904   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:01.952] E1017 13:09:01.952035   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:02.462] E1017 13:09:02.462043   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:02.607] E1017 13:09:02.606932   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:02.768] E1017 13:09:02.767658   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:02.954] E1017 13:09:02.953839   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:03.464] E1017 13:09:03.463967   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:03.609] E1017 13:09:03.608322   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:03.769] E1017 13:09:03.769149   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:03.956] E1017 13:09:03.955569   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:04.063] I1017 13:09:04.062786   53264 namespace_controller.go:185] Namespace has been deleted test-configmaps
W1017 13:09:04.466] E1017 13:09:04.465686   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:04.610] E1017 13:09:04.609781   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:04.771] E1017 13:09:04.771050   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:04.957] E1017 13:09:04.956934   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:05.469] E1017 13:09:05.468487   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:05.611] E1017 13:09:05.611302   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:05.773] E1017 13:09:05.772489   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:05.959] E1017 13:09:05.958491   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:06.464] +++ exit code: 0
I1017 13:09:06.510] Recording: run_job_tests
I1017 13:09:06.511] Running command: run_job_tests
I1017 13:09:06.540] 
I1017 13:09:06.544] +++ Running case: test-cmd.run_job_tests 
I1017 13:09:06.548] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:09:06.553] +++ command: run_job_tests
I1017 13:09:06.567] +++ [1017 13:09:06] Creating namespace namespace-1571317746-2286
I1017 13:09:06.652] namespace/namespace-1571317746-2286 created
I1017 13:09:06.733] Context "test" modified.
I1017 13:09:06.742] +++ [1017 13:09:06] Testing job
W1017 13:09:06.842] E1017 13:09:06.471888   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:06.843] E1017 13:09:06.612997   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:06.843] E1017 13:09:06.773995   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:06.943] batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
I1017 13:09:06.944] namespace/test-jobs created
I1017 13:09:07.049] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I1017 13:09:07.145] cronjob.batch/pi created
I1017 13:09:07.249] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I1017 13:09:07.335] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
I1017 13:09:07.336] pi     59 23 31 2 *   False     0        <none>          0s
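The "pi" CronJob above is created through the deprecated run generator (see the warning that follows); the replacement form that warning points at is kubectl create cronjob. A sketch, where the perl arguments are illustrative rather than read from batch.sh:

  kubectl create cronjob pi --namespace=test-jobs \
    --schedule="59 23 31 2 *" --image=k8s.gcr.io/perl \
    -- perl -Mbignum=bpi -wle 'print bpi(10)'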
W1017 13:09:07.436] E1017 13:09:06.959650   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:07.437] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:09:07.474] E1017 13:09:07.473388   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:07.574] Name:                          pi
I1017 13:09:07.575] Namespace:                     test-jobs
I1017 13:09:07.575] Labels:                        run=pi
I1017 13:09:07.575] Annotations:                   <none>
I1017 13:09:07.575] Schedule:                      59 23 31 2 *
I1017 13:09:07.575] Concurrency Policy:            Allow
I1017 13:09:07.575] Suspend:                       False
I1017 13:09:07.575] Successful Job History Limit:  3
I1017 13:09:07.575] Failed Job History Limit:      1
I1017 13:09:07.575] Starting Deadline Seconds:     <unset>
I1017 13:09:07.575] Selector:                      <unset>
I1017 13:09:07.576] Parallelism:                   <unset>
I1017 13:09:07.576] Completions:                   <unset>
I1017 13:09:07.576] Pod Template:
I1017 13:09:07.576]   Labels:  run=pi
... skipping 18 lines ...
I1017 13:09:07.577] Events:              <none>
I1017 13:09:07.577] Successful
I1017 13:09:07.577] message:job.batch/test-job
I1017 13:09:07.578] has:job.batch/test-job
I1017 13:09:07.640] batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I1017 13:09:07.738] job.batch/test-job created
W1017 13:09:07.839] E1017 13:09:07.614459   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:07.839] I1017 13:09:07.732210   53264 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"3e164520-0dd6-4f67-ba1c-212bf333b778", APIVersion:"batch/v1", ResourceVersion:"1421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-wpmv2
W1017 13:09:07.840] E1017 13:09:07.775270   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:07.914] I1017 13:09:07.913340   53264 event.go:262] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"test-jobs", Name:"pi", UID:"0e7724bd-cc3d-4c99-9181-0b8a21cfc25f", APIVersion:"batch/v1beta1", ResourceVersion:"1420", FieldPath:""}): type: 'Warning' reason: 'UnexpectedJob' Saw a job that the controller did not create or forgot: test-job
W1017 13:09:07.920] E1017 13:09:07.919822   53264 cronjob_controller.go:272] Cannot determine if test-jobs/pi needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew
W1017 13:09:07.921] I1017 13:09:07.920182   53264 event.go:262] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"test-jobs", Name:"pi", UID:"0e7724bd-cc3d-4c99-9181-0b8a21cfc25f", APIVersion:"batch/v1beta1", ResourceVersion:"1420", FieldPath:""}): type: 'Warning' reason: 'FailedNeedsStart' Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew
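The FailedNeedsStart warning above means the cronjob controller found more than 100 missed start times and gave up computing the next one; the remedy its own message suggests is bounding the lookback window with .spec.startingDeadlineSeconds. An illustrative patch (the 200-second value is arbitrary):

  kubectl patch cronjob pi --namespace=test-jobs \
    -p '{"spec":{"startingDeadlineSeconds":200}}'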
W1017 13:09:07.961] E1017 13:09:07.961043   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:08.062] batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
I1017 13:09:08.062] NAME       COMPLETIONS   DURATION   AGE
I1017 13:09:08.062] test-job   0/1           0s         0s
I1017 13:09:08.062] Name:           test-job
I1017 13:09:08.063] Namespace:      test-jobs
I1017 13:09:08.063] Selector:       controller-uid=3e164520-0dd6-4f67-ba1c-212bf333b778
... skipping 2 lines ...
I1017 13:09:08.063]                 run=pi
I1017 13:09:08.064] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1017 13:09:08.064] Controlled By:  CronJob/pi
I1017 13:09:08.064] Parallelism:    1
I1017 13:09:08.064] Completions:    1
I1017 13:09:08.064] Start Time:     Thu, 17 Oct 2019 13:09:07 +0000
I1017 13:09:08.065] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1017 13:09:08.065] Pod Template:
I1017 13:09:08.065]   Labels:  controller-uid=3e164520-0dd6-4f67-ba1c-212bf333b778
I1017 13:09:08.065]            job-name=test-job
I1017 13:09:08.065]            run=pi
I1017 13:09:08.065]   Containers:
I1017 13:09:08.065]    pi:
... skipping 15 lines ...
I1017 13:09:08.066]   Type    Reason            Age   From            Message
I1017 13:09:08.066]   ----    ------            ----  ----            -------
I1017 13:09:08.066]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-wpmv2
I1017 13:09:08.112] job.batch "test-job" deleted
I1017 13:09:08.204] cronjob.batch "pi" deleted
I1017 13:09:08.298] namespace "test-jobs" deleted
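Judging from "Controlled By: CronJob/pi" and the cronjob.kubernetes.io/instantiate: manual annotation in the describe output above, test-job was stamped out manually from the CronJob's template, i.e. something like:

  # create a one-off Job from an existing CronJob
  kubectl create job test-job --namespace=test-jobs --from=cronjob/pi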
W1017 13:09:08.475] E1017 13:09:08.474921   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:08.616] E1017 13:09:08.616311   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:08.777] E1017 13:09:08.776503   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:08.963] E1017 13:09:08.962473   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:09.478] E1017 13:09:09.477609   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:09.618] E1017 13:09:09.617957   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:09.778] E1017 13:09:09.777900   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:09.964] E1017 13:09:09.963927   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:10.479] E1017 13:09:10.479208   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:10.620] E1017 13:09:10.619579   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:10.779] E1017 13:09:10.779232   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:10.966] E1017 13:09:10.965610   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:11.433] I1017 13:09:11.432645   53264 namespace_controller.go:185] Namespace has been deleted test-service-accounts
W1017 13:09:11.482] E1017 13:09:11.481276   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:11.621] E1017 13:09:11.621060   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:11.781] E1017 13:09:11.780804   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:11.967] E1017 13:09:11.967164   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:12.483] E1017 13:09:12.482869   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:12.623] E1017 13:09:12.622577   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:12.783] E1017 13:09:12.782594   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:12.969] E1017 13:09:12.968815   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:13.476] +++ exit code: 0
I1017 13:09:13.526] Recording: run_create_job_tests
I1017 13:09:13.526] Running command: run_create_job_tests
I1017 13:09:13.561] 
I1017 13:09:13.565] +++ Running case: test-cmd.run_create_job_tests 
I1017 13:09:13.571] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:09:13.574] +++ command: run_create_job_tests
I1017 13:09:13.592] +++ [1017 13:09:13] Creating namespace namespace-1571317753-31923
I1017 13:09:13.677] namespace/namespace-1571317753-31923 created
I1017 13:09:13.765] Context "test" modified.
I1017 13:09:13.864] job.batch/test-job created
W1017 13:09:13.965] E1017 13:09:13.484591   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:13.966] E1017 13:09:13.624015   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:13.966] E1017 13:09:13.784217   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:13.967] I1017 13:09:13.862491   53264 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571317753-31923", Name:"test-job", UID:"49a750bb-9a3b-4ab5-aa43-2cdbfa1bd5da", APIVersion:"batch/v1", ResourceVersion:"1443", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-7jdcr
W1017 13:09:13.971] E1017 13:09:13.970357   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:14.071] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I1017 13:09:14.074] job.batch "test-job" deleted
W1017 13:09:14.178] I1017 13:09:14.177270   53264 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571317753-31923", Name:"test-job-pi", UID:"7fac0a5f-c4c6-490f-b3be-6d75a6ae7909", APIVersion:"batch/v1", ResourceVersion:"1451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-zs9p6
I1017 13:09:14.279] job.batch/test-job-pi created
I1017 13:09:14.304] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I1017 13:09:14.401] job.batch "test-job-pi" deleted
I1017 13:09:14.502] cronjob.batch/test-pi created
W1017 13:09:14.603] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:09:14.604] E1017 13:09:14.485910   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:14.605] I1017 13:09:14.600671   53264 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571317753-31923", Name:"my-pi", UID:"8a812076-7030-4d28-a186-461fc50f350a", APIVersion:"batch/v1", ResourceVersion:"1460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-sf8zh
W1017 13:09:14.625] E1017 13:09:14.625251   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:14.726] job.batch/my-pi created
I1017 13:09:14.727] Successful
I1017 13:09:14.727] message:[perl -Mbignum=bpi -wle print bpi(10)]
I1017 13:09:14.727] has:perl -Mbignum=bpi -wle print bpi(10)
I1017 13:09:14.793] job.batch "my-pi" deleted
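The jobs above exercise kubectl create job in both of its shapes, a bare image and an image plus command; the flags below are inferred from the create.sh:86/92 assertions and the bpi(10) output, not read from the script:

  kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
  kubectl create job test-job-pi --image=k8s.gcr.io/perl \
    -- perl -Mbignum=bpi -wle 'print bpi(10)'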
W1017 13:09:14.893] E1017 13:09:14.785481   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:14.974] E1017 13:09:14.973311   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:15.074] cronjob.batch "test-pi" deleted
I1017 13:09:15.075] +++ exit code: 0
I1017 13:09:15.075] Recording: run_pod_templates_tests
I1017 13:09:15.075] Running command: run_pod_templates_tests
I1017 13:09:15.075] 
I1017 13:09:15.075] +++ Running case: test-cmd.run_pod_templates_tests 
... skipping 2 lines ...
I1017 13:09:15.076] +++ [1017 13:09:15] Creating namespace namespace-1571317755-9088
I1017 13:09:15.122] namespace/namespace-1571317755-9088 created
I1017 13:09:15.214] Context "test" modified.
I1017 13:09:15.225] +++ [1017 13:09:15] Testing pod templates
I1017 13:09:15.347] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:15.545] podtemplate/nginx created
W1017 13:09:15.646] E1017 13:09:15.488868   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:15.647] I1017 13:09:15.542867   49669 controller.go:606] quota admission added evaluator for: podtemplates
W1017 13:09:15.647] E1017 13:09:15.626459   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:15.748] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 13:09:15.748] NAME    CONTAINERS   IMAGES   POD LABELS
I1017 13:09:15.749] nginx   nginx        nginx    name=nginx
W1017 13:09:15.851] E1017 13:09:15.786850   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:15.975] E1017 13:09:15.974783   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:16.076] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 13:09:16.076] podtemplate "nginx" deleted
I1017 13:09:16.194] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
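kubectl has no create podtemplate subcommand, so the nginx template above is presumably applied from a manifest; a minimal sketch consistent with the NAME/CONTAINERS/IMAGES/POD LABELS columns printed above:

  kubectl create -f - <<EOF
  apiVersion: v1
  kind: PodTemplate
  metadata:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
  EOF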
I1017 13:09:16.210] +++ exit code: 0
I1017 13:09:16.251] Recording: run_service_tests
I1017 13:09:16.251] Running command: run_service_tests
I1017 13:09:16.280] 
I1017 13:09:16.283] +++ Running case: test-cmd.run_service_tests 
I1017 13:09:16.286] +++ working dir: /go/src/k8s.io/kubernetes
I1017 13:09:16.289] +++ command: run_service_tests
I1017 13:09:16.371] Context "test" modified.
I1017 13:09:16.380] +++ [1017 13:09:16] Testing kubectl(v1:services)
W1017 13:09:16.490] E1017 13:09:16.489979   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:16.591] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:09:16.694] service/redis-master created
W1017 13:09:16.794] E1017 13:09:16.628158   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:16.795] E1017 13:09:16.788499   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:16.895] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 13:09:16.912]
I1017 13:09:16.917] core.sh:864: FAIL!
I1017 13:09:16.917] Describe services redis-master
I1017 13:09:16.917]   Expected Match: Name:
I1017 13:09:16.917]   Not found in:
I1017 13:09:16.917] Name:              redis-master
I1017 13:09:16.918] Namespace:         default
I1017 13:09:16.918] Labels:            app=redis
... skipping 39 lines ...
I1017 13:09:17.122] IP:                10.0.0.87
I1017 13:09:17.122] Port:              <unset>  6379/TCP
I1017 13:09:17.122] TargetPort:        6379/TCP
I1017 13:09:17.122] Endpoints:         <none>
I1017 13:09:17.122] Session Affinity:  None
I1017 13:09:17.122]
W1017 13:09:17.223] E1017 13:09:16.976255   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:17.323] core.sh:870: Successful describe
I1017 13:09:17.323] Name:              redis-master
I1017 13:09:17.324] Namespace:         default
I1017 13:09:17.324] Labels:            app=redis
I1017 13:09:17.324]                    role=master
I1017 13:09:17.324]                    tier=backend
... skipping 5 lines ...
I1017 13:09:17.325] TargetPort:        6379/TCP
I1017 13:09:17.325] Endpoints:         <none>
I1017 13:09:17.325] Session Affinity:  None
I1017 13:09:17.325] Events:            <none>
I1017 13:09:17.325]
I1017 13:09:17.350] 
I1017 13:09:17.351] FAIL!
I1017 13:09:17.351] Describe services
I1017 13:09:17.351]   Expected Match: Name:
I1017 13:09:17.352]   Not found in:
I1017 13:09:17.352] Name:              kubernetes
I1017 13:09:17.352] Namespace:         default
I1017 13:09:17.352] Labels:            component=apiserver
... skipping 155 lines ...
I1017 13:09:18.017]     role: padawan
I1017 13:09:18.017]   sessionAffinity: None
I1017 13:09:18.017]   type: ClusterIP
I1017 13:09:18.017] status:
I1017 13:09:18.017]   loadBalancer: {}
I1017 13:09:18.106] service/redis-master selector updated
W1017 13:09:18.206] E1017 13:09:17.491574   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:18.207] E1017 13:09:17.629941   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:18.207] E1017 13:09:17.789931   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:18.207] E1017 13:09:17.977815   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:18.307] core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
I1017 13:09:18.308] service/redis-master selector updated
I1017 13:09:18.423] core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1017 13:09:18.509] apiVersion: v1
I1017 13:09:18.509] kind: Service
I1017 13:09:18.509] metadata:
... skipping 17 lines ...
I1017 13:09:18.511]     role: padawan
I1017 13:09:18.511]   sessionAffinity: None
I1017 13:09:18.511]   type: ClusterIP
I1017 13:09:18.511] status:
I1017 13:09:18.511]   loadBalancer: {}
W1017 13:09:18.611] I1017 13:09:18.439959   53264 namespace_controller.go:185] Namespace has been deleted test-jobs
W1017 13:09:18.612] E1017 13:09:18.492951   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:18.612] error: you must specify resources by --filename when --local is set.
W1017 13:09:18.612] Example resource specifications include:
W1017 13:09:18.612]    '-f rsrc.yaml'
W1017 13:09:18.612]    '--filename=rsrc.json'
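The error above is kubectl set selector's flag validation: --local only operates on objects supplied via --filename. Both shapes:

  # server-side update, matching the core.sh:890 assertion
  kubectl set selector services redis-master role=padawan
  # --local without -f fails exactly as shown above
  kubectl set selector services redis-master role=padawan --local -o yaml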
W1017 13:09:18.632] E1017 13:09:18.631367   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:18.732] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1017 13:09:18.880] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 13:09:18.967] service "redis-master" deleted
I1017 13:09:19.069] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:09:19.169] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:09:19.346] service/redis-master created
W1017 13:09:19.447] E1017 13:09:18.791340   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:19.447] E1017 13:09:18.979118   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:19.495] E1017 13:09:19.494476   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:19.595] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 13:09:19.596] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1017 13:09:19.731] service/service-v1-test created
W1017 13:09:19.832] E1017 13:09:19.632555   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:19.832] E1017 13:09:19.792966   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:19.932] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1017 13:09:20.051] service/service-v1-test replaced
W1017 13:09:20.151] E1017 13:09:19.980576   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:20.252] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1017 13:09:20.273] service "redis-master" deleted
I1017 13:09:20.372] service "service-v1-test" deleted
I1017 13:09:20.483] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:09:20.583] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:09:20.766] service/redis-master created
W1017 13:09:20.867] E1017 13:09:20.495855   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:20.868] E1017 13:09:20.633930   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:20.868] E1017 13:09:20.794390   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:20.968] service/redis-slave created
W1017 13:09:21.069] E1017 13:09:20.981891   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:21.170] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I1017 13:09:21.184] Successful
I1017 13:09:21.185] message:NAME           RSRC
I1017 13:09:21.185] kubernetes     145
I1017 13:09:21.185] redis-master   1494
I1017 13:09:21.185] redis-slave    1497
I1017 13:09:21.186] has:redis-master
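The NAME/RSRC listing above pairs each service with its resourceVersion; one way to produce that shape, assuming (this is a guess, not read from core.sh) a custom-columns query:

  kubectl get services \
    -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion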
I1017 13:09:21.286] core.sh:979: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I1017 13:09:21.383] service "redis-master" deleted
I1017 13:09:21.390] service "redis-slave" deleted
I1017 13:09:21.502] core.sh:986: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:09:21.602] core.sh:990: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:09:21.684] service/beep-boop created
W1017 13:09:21.785] E1017 13:09:21.497511   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:21.785] E1017 13:09:21.635781   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:21.797] E1017 13:09:21.796412   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:21.897] core.sh:994: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I1017 13:09:21.915] core.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I1017 13:09:22.011] service "beep-boop" deleted
I1017 13:09:22.115] core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1017 13:09:22.220] core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
I1017 13:09:22.334] service/testmetadata created
I1017 13:09:22.334] deployment.apps/testmetadata created
W1017 13:09:22.435] E1017 13:09:21.983451   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:22.435] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1017 13:09:22.436] I1017 13:09:22.315991   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"5dbf0ad8-2b6e-4cae-b28f-ab8abb8c6a22", APIVersion:"apps/v1", ResourceVersion:"1509", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W1017 13:09:22.436] I1017 13:09:22.323386   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"87b19f82-d513-4529-90a1-2fbb21b61b81", APIVersion:"apps/v1", ResourceVersion:"1510", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-f7pmd
W1017 13:09:22.436] I1017 13:09:22.326600   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"87b19f82-d513-4529-90a1-2fbb21b61b81", APIVersion:"apps/v1", ResourceVersion:"1510", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-vrstm
W1017 13:09:22.499] E1017 13:09:22.498939   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:22.600] core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I1017 13:09:22.600] core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
I1017 13:09:22.645] service/exposemetadata exposed
W1017 13:09:22.746] E1017 13:09:22.637606   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:22.798] E1017 13:09:22.797922   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:22.899] core.sh:1020: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
I1017 13:09:22.899] service "exposemetadata" deleted
I1017 13:09:22.899] service "testmetadata" deleted
I1017 13:09:22.967] deployment.apps "testmetadata" deleted
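The core.sh:1014 and core.sh:1020 assertions above check that annotations (zone-context: home/work) ride along through creation and expose. Assuming the annotation is injected with the generic --overrides JSON flag, which the run and expose commands accept, the exposing step would look roughly like:

  kubectl expose service testmetadata --name=exposemetadata --port=80 \
    --overrides='{"metadata":{"annotations":{"zone-context":"work"}}}'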
I1017 13:09:22.993] +++ exit code: 0
I1017 13:09:23.030] Recording: run_daemonset_tests
... skipping 5 lines ...
I1017 13:09:23.078] +++ [1017 13:09:23] Creating namespace namespace-1571317763-24585
I1017 13:09:23.160] namespace/namespace-1571317763-24585 created
I1017 13:09:23.236] Context "test" modified.
I1017 13:09:23.244] +++ [1017 13:09:23] Testing kubectl(v1:daemonsets)
I1017 13:09:23.345] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:23.550] daemonset.apps/bind created
W1017 13:09:23.651] E1017 13:09:22.984458   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:23.651] E1017 13:09:23.500455   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:23.652] I1017 13:09:23.547826   49669 controller.go:606] quota admission added evaluator for: daemonsets.apps
W1017 13:09:23.652] I1017 13:09:23.557177   49669 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W1017 13:09:23.653] E1017 13:09:23.639010   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:23.753] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I1017 13:09:23.844] daemonset.apps/bind configured
W1017 13:09:23.945] E1017 13:09:23.799187   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:23.986] E1017 13:09:23.985820   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:24.086] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I1017 13:09:24.087] daemonset.apps/bind image updated
I1017 13:09:24.151] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I1017 13:09:24.249] daemonset.apps/bind env updated
I1017 13:09:24.360] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I1017 13:09:24.453] daemonset.apps/bind resource requirements updated
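Each "updated" line above is a kubectl set mutation, and each one bumps the DaemonSet's .metadata.generation, which the apps.sh:34-42 assertions track from 1 to 3. Illustrative equivalents (the image, env, and resource values here are made up):

  kubectl set image daemonsets/bind '*=k8s.gcr.io/pause:latest'
  kubectl set env daemonsets/bind FOO=bar
  kubectl set resources daemonsets/bind --limits=cpu=200m,memory=512Mi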
... skipping 11 lines ...
I1017 13:09:24.966] +++ [1017 13:09:24] Creating namespace namespace-1571317764-10725
I1017 13:09:25.065] namespace/namespace-1571317764-10725 created
I1017 13:09:25.144] Context "test" modified.
I1017 13:09:25.151] +++ [1017 13:09:25] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I1017 13:09:25.248] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:25.442] daemonset.apps/bind created
W1017 13:09:25.543] E1017 13:09:24.501668   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:25.543] E1017 13:09:24.641405   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:25.544] E1017 13:09:24.800595   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:25.544] E1017 13:09:24.988458   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:25.544] E1017 13:09:25.505550   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:25.643] E1017 13:09:25.643181   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:25.745] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1571317764-10725"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1017 13:09:25.746]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I1017 13:09:25.746] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I1017 13:09:25.809] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 13:09:25.938] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 13:09:26.170] daemonset.apps/bind configured
W1017 13:09:26.270] E1017 13:09:25.805282   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:26.271] E1017 13:09:25.989890   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:26.371] apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1017 13:09:26.416] apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:09:26.527] apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1017 13:09:26.645] apps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1571317764-10725"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1017 13:09:26.646]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1571317764-10725"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1017 13:09:26.647]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
... skipping 9 lines ...
I1017 13:09:26.760]   Volumes:	<none>
I1017 13:09:26.760]  (dry run)
I1017 13:09:26.864] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1017 13:09:26.965] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:09:27.066] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1017 13:09:27.172] daemonset.apps/bind rolled back
W1017 13:09:27.273] E1017 13:09:26.507126   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:27.273] E1017 13:09:26.647785   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:27.274] E1017 13:09:26.807320   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:27.274] E1017 13:09:26.991395   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:27.278] E1017 13:09:27.194285   53264 daemon_controller.go:302] namespace-1571317764-10725/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1571317764-10725", SelfLink:"/apis/apps/v1/namespaces/namespace-1571317764-10725/daemonsets/bind", UID:"5016dc51-2904-4fac-bcc5-52fd791377bc", ResourceVersion:"1576", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63706914565, loc:(*time.Location)(0x7763040)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1571317764-10725\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001a24740), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f1b018), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002274480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001a24760), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a819e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001f1b07c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
I1017 13:09:27.379] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 13:09:27.384] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 13:09:27.496] Successful
I1017 13:09:27.497] message:error: unable to find specified revision 1000000 in history
I1017 13:09:27.497] has:unable to find specified revision
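The rollback traffic above maps onto kubectl rollout, which for DaemonSets replays controllerrevisions; the 1000000 case is the expected failure for a revision that does not exist:

  kubectl rollout history daemonset bind
  kubectl rollout undo daemonset bind --to-revision=1
  # fails with "unable to find specified revision 1000000 in history"
  kubectl rollout undo daemonset bind --to-revision=1000000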
I1017 13:09:27.593] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1017 13:09:27.690] (Bapps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1017 13:09:27.806] daemonset.apps/bind rolled back
W1017 13:09:27.907] E1017 13:09:27.508411   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:27.908] E1017 13:09:27.649407   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:27.908] E1017 13:09:27.808404   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:27.912] E1017 13:09:27.819332   53264 daemon_controller.go:302] namespace-1571317764-10725/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1571317764-10725", SelfLink:"/apis/apps/v1/namespaces/namespace-1571317764-10725/daemonsets/bind", UID:"5016dc51-2904-4fac-bcc5-52fd791377bc", ResourceVersion:"1579", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63706914565, loc:(*time.Location)(0x7763040)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1571317764-10725\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00092d2c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, 
v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001dc0fb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001dc9e00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc00092d440), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002006178)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001dc100c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1017 13:09:27.993] E1017 13:09:27.993218   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:28.094] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1017 13:09:28.094] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:09:28.119] apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1017 13:09:28.201] daemonset.apps "bind" deleted
I1017 13:09:28.228] +++ exit code: 0
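Side note on the checks above: apps.sh drives kubectl with Go templates to read fields off live objects. A minimal standalone equivalent of apps.sh:97 and apps.sh:99, assuming a reachable test cluster that still has the bind daemonset:

  # first container image of every daemonset (mirrors apps.sh:97)
  kubectl get daemonset -o go-template='{{range .items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}'
  # container count per daemonset (mirrors apps.sh:99)
  kubectl get daemonset -o go-template='{{range .items}}{{(len .spec.template.spec.containers)}}{{end}}'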
I1017 13:09:28.270] Recording: run_rc_tests
... skipping 5 lines ...
I1017 13:09:28.323] +++ [1017 13:09:28] Creating namespace namespace-1571317768-18783
I1017 13:09:28.414] namespace/namespace-1571317768-18783 created
I1017 13:09:28.499] Context "test" modified.
I1017 13:09:28.508] +++ [1017 13:09:28] Testing kubectl(v1:replicationcontrollers)
I1017 13:09:28.611] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:28.787] replicationcontroller/frontend created
W1017 13:09:28.888] E1017 13:09:28.510007   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:28.889] E1017 13:09:28.650812   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:28.889] I1017 13:09:28.794342   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"ffd41e77-94fb-4db6-a550-f14ad46d618b", APIVersion:"v1", ResourceVersion:"1589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-652f7
W1017 13:09:28.890] I1017 13:09:28.798012   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"ffd41e77-94fb-4db6-a550-f14ad46d618b", APIVersion:"v1", ResourceVersion:"1589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ssd8t
W1017 13:09:28.890] I1017 13:09:28.798228   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"ffd41e77-94fb-4db6-a550-f14ad46d618b", APIVersion:"v1", ResourceVersion:"1589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cq6m7
W1017 13:09:28.890] E1017 13:09:28.809600   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:28.990] replicationcontroller "frontend" deleted
I1017 13:09:29.010] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:29.109] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:29.280] replicationcontroller/frontend created
W1017 13:09:29.381] E1017 13:09:28.994507   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:29.382] I1017 13:09:29.283300   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"1ed2a6b1-c727-4706-b9b3-011abbafd309", APIVersion:"v1", ResourceVersion:"1605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pwfbd
W1017 13:09:29.382] I1017 13:09:29.286223   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"1ed2a6b1-c727-4706-b9b3-011abbafd309", APIVersion:"v1", ResourceVersion:"1605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mxg5h
W1017 13:09:29.383] I1017 13:09:29.287083   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"1ed2a6b1-c727-4706-b9b3-011abbafd309", APIVersion:"v1", ResourceVersion:"1605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-g4jcg
I1017 13:09:29.483] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 13:09:29.517] 
I1017 13:09:29.523] core.sh:1061: FAIL!
I1017 13:09:29.523] Describe rc frontend
I1017 13:09:29.524]   Expected Match: Name:
I1017 13:09:29.524]   Not found in:
I1017 13:09:29.524] Name:         frontend
I1017 13:09:29.524] Namespace:    namespace-1571317768-18783
I1017 13:09:29.524] Selector:     app=guestbook,tier=frontend
I1017 13:09:29.524] Labels:       app=guestbook
I1017 13:09:29.524]               tier=frontend
I1017 13:09:29.524] Annotations:  <none>
I1017 13:09:29.525] Replicas:     3 current / 3 desired
I1017 13:09:29.525] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:29.525] Pod Template:
I1017 13:09:29.525]   Labels:  app=guestbook
I1017 13:09:29.525]            tier=frontend
I1017 13:09:29.525]   Containers:
I1017 13:09:29.525]    php-redis:
I1017 13:09:29.525]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1017 13:09:29.527]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-pwfbd
I1017 13:09:29.527]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-mxg5h
I1017 13:09:29.527]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-g4jcg
I1017 13:09:29.527] 
I1017 13:09:29.527] 1061 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1017 13:09:29.527] 
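The FAIL! above is the harness grepping kubectl describe output for an expected header; a rough sketch of the core.sh:1061 assertion, assuming the frontend replication controller is still present:

  # require the describe output to contain the Name: header
  kubectl describe rc frontend | grep -q 'Name:' || echo 'FAIL: Name: not found'

Note the dumped output does begin with "Name: frontend", so the mismatch appears to sit in the harness's expected-output matching rather than in kubectl describe itself.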
W1017 13:09:29.628] E1017 13:09:29.511205   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:29.652] E1017 13:09:29.652084   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:29.753] core.sh:1063: Successful describe
I1017 13:09:29.753] Name:         frontend
I1017 13:09:29.754] Namespace:    namespace-1571317768-18783
I1017 13:09:29.754] Selector:     app=guestbook,tier=frontend
I1017 13:09:29.754] Labels:       app=guestbook
I1017 13:09:29.754]               tier=frontend
I1017 13:09:29.754] Annotations:  <none>
I1017 13:09:29.754] Replicas:     3 current / 3 desired
I1017 13:09:29.754] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:29.754] Pod Template:
I1017 13:09:29.754]   Labels:  app=guestbook
I1017 13:09:29.754]            tier=frontend
I1017 13:09:29.754]   Containers:
I1017 13:09:29.755]    php-redis:
I1017 13:09:29.755]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I1017 13:09:29.772] Namespace:    namespace-1571317768-18783
I1017 13:09:29.773] Selector:     app=guestbook,tier=frontend
I1017 13:09:29.773] Labels:       app=guestbook
I1017 13:09:29.773]               tier=frontend
I1017 13:09:29.773] Annotations:  <none>
I1017 13:09:29.773] Replicas:     3 current / 3 desired
I1017 13:09:29.773] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:29.773] Pod Template:
I1017 13:09:29.773]   Labels:  app=guestbook
I1017 13:09:29.774]            tier=frontend
I1017 13:09:29.774]   Containers:
I1017 13:09:29.774]    php-redis:
I1017 13:09:29.774]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 4 lines ...
I1017 13:09:29.774]       memory:  100Mi
I1017 13:09:29.775]     Environment:
I1017 13:09:29.775]       GET_HOSTS_FROM:  dns
I1017 13:09:29.775]     Mounts:            <none>
I1017 13:09:29.775]   Volumes:             <none>
I1017 13:09:29.775] 
W1017 13:09:29.876] E1017 13:09:29.810789   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:29.976] core.sh:1067: Successful describe
I1017 13:09:29.976] Name:         frontend
I1017 13:09:29.977] Namespace:    namespace-1571317768-18783
I1017 13:09:29.977] Selector:     app=guestbook,tier=frontend
I1017 13:09:29.977] Labels:       app=guestbook
I1017 13:09:29.977]               tier=frontend
I1017 13:09:29.977] Annotations:  <none>
I1017 13:09:29.977] Replicas:     3 current / 3 desired
I1017 13:09:29.977] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:29.978] Pod Template:
I1017 13:09:29.978]   Labels:  app=guestbook
I1017 13:09:29.978]            tier=frontend
I1017 13:09:29.978]   Containers:
I1017 13:09:29.978]    php-redis:
I1017 13:09:29.978]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1017 13:09:29.980]   ----    ------            ----  ----                    -------
I1017 13:09:29.980]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-pwfbd
I1017 13:09:29.980]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-mxg5h
I1017 13:09:29.981]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-g4jcg
I1017 13:09:29.981] 
I1017 13:09:30.001] 
I1017 13:09:30.002] FAIL!
I1017 13:09:30.002] Describe rc
I1017 13:09:30.002]   Expected Match: Name:
I1017 13:09:30.002]   Not found in:
I1017 13:09:30.002] Name:         frontend
I1017 13:09:30.002] Namespace:    namespace-1571317768-18783
I1017 13:09:30.002] Selector:     app=guestbook,tier=frontend
I1017 13:09:30.002] Labels:       app=guestbook
I1017 13:09:30.003]               tier=frontend
I1017 13:09:30.003] Annotations:  <none>
I1017 13:09:30.003] Replicas:     3 current / 3 desired
I1017 13:09:30.003] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:30.003] Pod Template:
I1017 13:09:30.003]   Labels:  app=guestbook
I1017 13:09:30.003]            tier=frontend
I1017 13:09:30.003]   Containers:
I1017 13:09:30.004]    php-redis:
I1017 13:09:30.004]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1017 13:09:30.005]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-pwfbd
I1017 13:09:30.005]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-mxg5h
I1017 13:09:30.006]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-g4jcg
I1017 13:09:30.006] 
I1017 13:09:30.006] 1069 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1017 13:09:30.006] 
W1017 13:09:30.106] E1017 13:09:29.996273   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:30.207] Successful describe
I1017 13:09:30.207] Name:         frontend
I1017 13:09:30.207] Namespace:    namespace-1571317768-18783
I1017 13:09:30.208] Selector:     app=guestbook,tier=frontend
I1017 13:09:30.208] Labels:       app=guestbook
I1017 13:09:30.208]               tier=frontend
I1017 13:09:30.208] Annotations:  <none>
I1017 13:09:30.208] Replicas:     3 current / 3 desired
I1017 13:09:30.208] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:30.208] Pod Template:
I1017 13:09:30.209]   Labels:  app=guestbook
I1017 13:09:30.209]            tier=frontend
I1017 13:09:30.209]   Containers:
I1017 13:09:30.209]    php-redis:
I1017 13:09:30.209]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1017 13:09:30.227] Namespace:    namespace-1571317768-18783
I1017 13:09:30.227] Selector:     app=guestbook,tier=frontend
I1017 13:09:30.227] Labels:       app=guestbook
I1017 13:09:30.227]               tier=frontend
I1017 13:09:30.227] Annotations:  <none>
I1017 13:09:30.227] Replicas:     3 current / 3 desired
I1017 13:09:30.228] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:30.228] Pod Template:
I1017 13:09:30.228]   Labels:  app=guestbook
I1017 13:09:30.228]            tier=frontend
I1017 13:09:30.228]   Containers:
I1017 13:09:30.228]    php-redis:
I1017 13:09:30.228]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1017 13:09:30.338] Namespace:    namespace-1571317768-18783
I1017 13:09:30.338] Selector:     app=guestbook,tier=frontend
I1017 13:09:30.338] Labels:       app=guestbook
I1017 13:09:30.338]               tier=frontend
I1017 13:09:30.338] Annotations:  <none>
I1017 13:09:30.338] Replicas:     3 current / 3 desired
I1017 13:09:30.339] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:30.339] Pod Template:
I1017 13:09:30.340]   Labels:  app=guestbook
I1017 13:09:30.340]            tier=frontend
I1017 13:09:30.340]   Containers:
I1017 13:09:30.340]    php-redis:
I1017 13:09:30.340]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 19 lines ...
I1017 13:09:30.923] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I1017 13:09:31.029] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I1017 13:09:31.116] replicationcontroller/frontend scaled
I1017 13:09:31.218] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I1017 13:09:31.320] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I1017 13:09:31.409] replicationcontroller/frontend scaled
W1017 13:09:31.512] E1017 13:09:30.512908   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:31.512] I1017 13:09:30.533084   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"1ed2a6b1-c727-4706-b9b3-011abbafd309", APIVersion:"v1", ResourceVersion:"1615", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-pwfbd
W1017 13:09:31.512] E1017 13:09:30.653584   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:31.513] E1017 13:09:30.811903   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:31.513] error: Expected replicas to be 3, was 2
W1017 13:09:31.513] E1017 13:09:30.997798   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:31.513] I1017 13:09:31.117880   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"1ed2a6b1-c727-4706-b9b3-011abbafd309", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-krpq6
W1017 13:09:31.514] I1017 13:09:31.413940   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"1ed2a6b1-c727-4706-b9b3-011abbafd309", APIVersion:"v1", ResourceVersion:"1626", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-krpq6
W1017 13:09:31.515] E1017 13:09:31.514453   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:31.615] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I1017 13:09:31.615] replicationcontroller "frontend" deleted
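The scale steps above, including the "Expected replicas to be 3, was 2" error in the warning stream, map onto kubectl scale with and without a precondition; a minimal sketch against the frontend rc:

  kubectl scale rc frontend --replicas=3                       # unconditional scale (core.sh:1099)
  kubectl scale rc frontend --current-replicas=3 --replicas=4  # precondition: rejected when the rc is not at 3 replicas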
W1017 13:09:31.716] E1017 13:09:31.654919   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:31.795] I1017 13:09:31.795164   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-master", UID:"60eb8ddf-d256-4955-9e26-1c7da5065e8c", APIVersion:"v1", ResourceVersion:"1637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-7c5pn
W1017 13:09:31.814] E1017 13:09:31.813597   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:31.914] replicationcontroller/redis-master created
I1017 13:09:32.000] replicationcontroller/redis-slave created
W1017 13:09:32.101] E1017 13:09:31.999093   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:32.102] I1017 13:09:32.004176   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-slave", UID:"acecbf60-7d2e-4a7e-8b72-67c634254a14", APIVersion:"v1", ResourceVersion:"1642", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-ww6v5
W1017 13:09:32.102] I1017 13:09:32.007526   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-slave", UID:"acecbf60-7d2e-4a7e-8b72-67c634254a14", APIVersion:"v1", ResourceVersion:"1642", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-7hqfm
W1017 13:09:32.109] I1017 13:09:32.108492   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-master", UID:"60eb8ddf-d256-4955-9e26-1c7da5065e8c", APIVersion:"v1", ResourceVersion:"1649", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-wtqk9
W1017 13:09:32.112] I1017 13:09:32.112000   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-master", UID:"60eb8ddf-d256-4955-9e26-1c7da5065e8c", APIVersion:"v1", ResourceVersion:"1649", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-snprz
W1017 13:09:32.113] I1017 13:09:32.112789   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-master", UID:"60eb8ddf-d256-4955-9e26-1c7da5065e8c", APIVersion:"v1", ResourceVersion:"1649", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-mqq4t
W1017 13:09:32.114] I1017 13:09:32.114109   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-slave", UID:"acecbf60-7d2e-4a7e-8b72-67c634254a14", APIVersion:"v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-jdbfr
W1017 13:09:32.116] I1017 13:09:32.115760   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-slave", UID:"acecbf60-7d2e-4a7e-8b72-67c634254a14", APIVersion:"v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-j5p8j
I1017 13:09:32.216] replicationcontroller/redis-master scaled
I1017 13:09:32.217] replicationcontroller/redis-slave scaled
I1017 13:09:32.217] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
I1017 13:09:32.321] core.sh:1118: Successful get rc redis-slave {{.spec.replicas}}: 4
I1017 13:09:32.404] replicationcontroller "redis-master" deleted
I1017 13:09:32.410] replicationcontroller "redis-slave" deleted
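core.sh:1117-1118 see both rc's at 4 replicas because kubectl scale accepts several resources in one call; a sketch:

  kubectl scale rc redis-master redis-slave --replicas=4  # scales both replication controllers together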
W1017 13:09:32.516] E1017 13:09:32.516063   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:32.601] I1017 13:09:32.600297   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment", UID:"5110e29e-85ac-47e7-bb43-da6ff8ea79be", APIVersion:"apps/v1", ResourceVersion:"1686", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 13:09:32.605] I1017 13:09:32.604750   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-6986c7bc94", UID:"bb271f5a-c605-4e77-90ff-60fb4adaa47e", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-7dpct
W1017 13:09:32.608] I1017 13:09:32.607634   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-6986c7bc94", UID:"bb271f5a-c605-4e77-90ff-60fb4adaa47e", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-jrl9t
W1017 13:09:32.609] I1017 13:09:32.608817   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-6986c7bc94", UID:"bb271f5a-c605-4e77-90ff-60fb4adaa47e", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-cp87v
W1017 13:09:32.657] E1017 13:09:32.656588   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:32.707] I1017 13:09:32.706501   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment", UID:"5110e29e-85ac-47e7-bb43-da6ff8ea79be", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W1017 13:09:32.712] I1017 13:09:32.711479   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-6986c7bc94", UID:"bb271f5a-c605-4e77-90ff-60fb4adaa47e", APIVersion:"apps/v1", ResourceVersion:"1701", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-cp87v
W1017 13:09:32.714] I1017 13:09:32.713384   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-6986c7bc94", UID:"bb271f5a-c605-4e77-90ff-60fb4adaa47e", APIVersion:"apps/v1", ResourceVersion:"1701", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-jrl9t
I1017 13:09:32.814] deployment.apps/nginx-deployment created
I1017 13:09:32.815] deployment.apps/nginx-deployment scaled
I1017 13:09:32.815] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I1017 13:09:32.888] deployment.apps "nginx-deployment" deleted
W1017 13:09:32.989] E1017 13:09:32.814885   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:33.000] E1017 13:09:33.000334   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:33.101] Successful
I1017 13:09:33.102] message:service/expose-test-deployment exposed
I1017 13:09:33.102] has:service/expose-test-deployment exposed
I1017 13:09:33.103] service "expose-test-deployment" deleted
I1017 13:09:33.182] Successful
I1017 13:09:33.182] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1017 13:09:33.182] See 'kubectl expose -h' for help and examples
I1017 13:09:33.182] has:invalid deployment: no selectors
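The failure above is kubectl expose's selector-introspection path: a target without a derivable selector cannot be exposed. The happy path, using the expose-test-deployment name from the test (the exact manifest behind it is an assumption here):

  kubectl expose deployment expose-test-deployment --port=80  # succeeds when the deployment carries a selector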
I1017 13:09:33.370] deployment.apps/nginx-deployment created
W1017 13:09:33.471] I1017 13:09:33.373682   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment", UID:"35d81740-005d-4c00-b944-08a8f66541ca", APIVersion:"apps/v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 13:09:33.472] I1017 13:09:33.377706   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-6986c7bc94", UID:"ae4cb836-a133-4fc1-a8af-1034804e12a6", APIVersion:"apps/v1", ResourceVersion:"1725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-w5td9
W1017 13:09:33.472] I1017 13:09:33.380023   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-6986c7bc94", UID:"ae4cb836-a133-4fc1-a8af-1034804e12a6", APIVersion:"apps/v1", ResourceVersion:"1725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-gdds7
W1017 13:09:33.472] I1017 13:09:33.381019   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-6986c7bc94", UID:"ae4cb836-a133-4fc1-a8af-1034804e12a6", APIVersion:"apps/v1", ResourceVersion:"1725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-lkmrn
W1017 13:09:33.518] E1017 13:09:33.517905   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:33.619] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I1017 13:09:33.619] service/nginx-deployment exposed
I1017 13:09:33.695] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I1017 13:09:33.779] deployment.apps "nginx-deployment" deleted
I1017 13:09:33.789] service "nginx-deployment" deleted
W1017 13:09:33.890] E1017 13:09:33.658003   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:33.891] E1017 13:09:33.816194   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:33.972] I1017 13:09:33.971886   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"947f9f44-bdc0-4b8d-abee-4044f23672ae", APIVersion:"v1", ResourceVersion:"1753", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8kv69
W1017 13:09:33.975] I1017 13:09:33.974952   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"947f9f44-bdc0-4b8d-abee-4044f23672ae", APIVersion:"v1", ResourceVersion:"1753", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h5lv8
W1017 13:09:33.976] I1017 13:09:33.975174   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"947f9f44-bdc0-4b8d-abee-4044f23672ae", APIVersion:"v1", ResourceVersion:"1753", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-25tpc
W1017 13:09:34.002] E1017 13:09:34.001876   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:34.103] replicationcontroller/frontend created
I1017 13:09:34.104] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I1017 13:09:34.186] service/frontend exposed
I1017 13:09:34.295] core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1017 13:09:34.396] service/frontend-2 exposed
I1017 13:09:34.504] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
I1017 13:09:34.689] pod/valid-pod created
W1017 13:09:34.790] E1017 13:09:34.519168   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:34.790] E1017 13:09:34.659473   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:34.817] E1017 13:09:34.817291   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:34.918] service/frontend-3 exposed
I1017 13:09:34.931] core.sh:1170: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
I1017 13:09:35.043] service/frontend-4 exposed
W1017 13:09:35.143] E1017 13:09:35.003141   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:35.244] core.sh:1174: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
I1017 13:09:35.279] service/frontend-5 exposed
I1017 13:09:35.392] core.sh:1178: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
I1017 13:09:35.495] pod "valid-pod" deleted
W1017 13:09:35.596] E1017 13:09:35.520543   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:35.661] E1017 13:09:35.661009   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:35.762] service "frontend" deleted
I1017 13:09:35.762] service "frontend-2" deleted
I1017 13:09:35.763] service "frontend-3" deleted
I1017 13:09:35.763] service "frontend-4" deleted
I1017 13:09:35.763] service "frontend-5" deleted
I1017 13:09:35.763] Successful
I1017 13:09:35.763] message:error: cannot expose a Node
I1017 13:09:35.763] has:cannot expose
I1017 13:09:35.851] Successful
I1017 13:09:35.852] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1017 13:09:35.852] has:metadata.name: Invalid value
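Service names must be DNS-1035 labels of at most 63 characters, which is what the Invalid value error above exercises; for example (assuming the frontend rc as the expose target):

  # rejected by validation: the service name exceeds 63 characters
  kubectl expose rc frontend --port=80 --name=invalid-large-service-name-that-has-more-than-sixty-three-characters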
W1017 13:09:35.953] E1017 13:09:35.818606   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:36.005] E1017 13:09:36.004541   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:36.105] Successful
I1017 13:09:36.106] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I1017 13:09:36.106] has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I1017 13:09:36.106] service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
I1017 13:09:36.184] Successful
I1017 13:09:36.184] message:service/etcd-server exposed
I1017 13:09:36.184] has:etcd-server exposed
I1017 13:09:36.290] core.sh:1208: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
I1017 13:09:36.394] core.sh:1209: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
I1017 13:09:36.493] service "etcd-server" deleted
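core.sh:1208-1209 index into the service's port list with Go templates; the same lookup by hand, assuming the two-port etcd-server service created by the test:

  kubectl get service etcd-server -o go-template='{{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}'
  # expected output: port-2 2379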
W1017 13:09:36.593] E1017 13:09:36.521988   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:36.663] E1017 13:09:36.662930   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:36.764] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 13:09:36.765] replicationcontroller "frontend" deleted
I1017 13:09:36.830] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:36.939] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:37.113] replicationcontroller/frontend created
W1017 13:09:37.214] E1017 13:09:36.819795   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:37.215] E1017 13:09:37.006227   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:37.215] I1017 13:09:37.117374   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"fdcdd472-416c-4cb1-a81f-d018a6184217", APIVersion:"v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ncswk
W1017 13:09:37.215] I1017 13:09:37.120392   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"fdcdd472-416c-4cb1-a81f-d018a6184217", APIVersion:"v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bw2lm
W1017 13:09:37.215] I1017 13:09:37.120429   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"fdcdd472-416c-4cb1-a81f-d018a6184217", APIVersion:"v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b2jfk
I1017 13:09:37.322] replicationcontroller/redis-slave created
W1017 13:09:37.424] I1017 13:09:37.325870   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-slave", UID:"064aea5d-507c-429f-83af-264574203ea0", APIVersion:"v1", ResourceVersion:"1825", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-qlpmm
W1017 13:09:37.424] I1017 13:09:37.329824   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"redis-slave", UID:"064aea5d-507c-429f-83af-264574203ea0", APIVersion:"v1", ResourceVersion:"1825", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-lmnch
W1017 13:09:37.524] E1017 13:09:37.523845   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:37.625] core.sh:1228: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I1017 13:09:37.626] core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I1017 13:09:37.639] replicationcontroller "frontend" deleted
I1017 13:09:37.643] replicationcontroller "redis-slave" deleted
W1017 13:09:37.744] E1017 13:09:37.664229   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:37.821] E1017 13:09:37.821267   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:37.922] core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:37.922] core.sh:1240: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:38.029] replicationcontroller/frontend created
W1017 13:09:38.130] E1017 13:09:38.007522   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:38.130] I1017 13:09:38.032355   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"5ff387a6-0c8b-4b68-adad-a08a24aca1c0", APIVersion:"v1", ResourceVersion:"1844", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zrj7b
W1017 13:09:38.131] I1017 13:09:38.035323   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"5ff387a6-0c8b-4b68-adad-a08a24aca1c0", APIVersion:"v1", ResourceVersion:"1844", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7phsk
W1017 13:09:38.131] I1017 13:09:38.035701   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571317768-18783", Name:"frontend", UID:"5ff387a6-0c8b-4b68-adad-a08a24aca1c0", APIVersion:"v1", ResourceVersion:"1844", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xgfdz
I1017 13:09:38.231] core.sh:1243: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1017 13:09:38.236] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1017 13:09:38.369] core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I1017 13:09:38.472] horizontalpodautoscaler.autoscaling "frontend" deleted
W1017 13:09:38.573] E1017 13:09:38.525429   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:38.666] E1017 13:09:38.665766   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:38.766] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1017 13:09:38.767] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1017 13:09:38.792] horizontalpodautoscaler.autoscaling "frontend" deleted
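core.sh:1246 and 1250 assert the hpa spec written by kubectl autoscale, and the required flag(s) "max" not set error just below shows --max is mandatory. A sketch against the frontend rc (the exact flags used by core.sh are an assumption here):

  kubectl autoscale rc frontend --max=2 --cpu-percent=70          # min defaults to 1 server-side; asserted as 1 2 70
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80  # asserted as 2 3 80
  kubectl autoscale rc frontend --min=2                           # rejected: required flag(s) "max" not set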
W1017 13:09:38.893] E1017 13:09:38.822574   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:38.893] Error: required flag(s) "max" not set
W1017 13:09:38.893] 
W1017 13:09:38.893] 
W1017 13:09:38.893] Examples:
W1017 13:09:38.893]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1017 13:09:38.894]   kubectl autoscale deployment foo --min=2 --max=10
W1017 13:09:38.894]   
... skipping 54 lines ...
I1017 13:09:39.176]           limits:
I1017 13:09:39.177]             cpu: 300m
I1017 13:09:39.177]           requests:
I1017 13:09:39.177]             cpu: 300m
I1017 13:09:39.177]       terminationGracePeriodSeconds: 0
I1017 13:09:39.177] status: {}
W1017 13:09:39.278] E1017 13:09:39.008885   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:39.278] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I1017 13:09:39.443] deployment.apps/nginx-deployment-resources created
W1017 13:09:39.544] I1017 13:09:39.445918   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources", UID:"79ed2ad6-5df6-4989-a141-cb6bff75f37f", APIVersion:"apps/v1", ResourceVersion:"1865", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
W1017 13:09:39.544] I1017 13:09:39.450091   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources-67f8cfff5", UID:"fe2a3975-3161-49e0-bc03-1b1680013490", APIVersion:"apps/v1", ResourceVersion:"1866", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-c8tzd
W1017 13:09:39.545] I1017 13:09:39.452810   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources-67f8cfff5", UID:"fe2a3975-3161-49e0-bc03-1b1680013490", APIVersion:"apps/v1", ResourceVersion:"1866", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-d8sqs
W1017 13:09:39.545] I1017 13:09:39.455194   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources-67f8cfff5", UID:"fe2a3975-3161-49e0-bc03-1b1680013490", APIVersion:"apps/v1", ResourceVersion:"1866", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-ftgjf
W1017 13:09:39.546] E1017 13:09:39.526967   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:39.646] core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I1017 13:09:39.663] core.sh:1266: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:09:39.759] core.sh:1267: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I1017 13:09:39.863] deployment.apps/nginx-deployment-resources resource requirements updated
W1017 13:09:39.963] E1017 13:09:39.667275   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:39.964] E1017 13:09:39.824001   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:39.965] I1017 13:09:39.866942   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources", UID:"79ed2ad6-5df6-4989-a141-cb6bff75f37f", APIVersion:"apps/v1", ResourceVersion:"1879", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
W1017 13:09:39.965] I1017 13:09:39.870247   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources-55c547f795", UID:"1f6064e0-d469-47da-a874-617b19875aee", APIVersion:"apps/v1", ResourceVersion:"1880", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-j5ms6
W1017 13:09:40.010] E1017 13:09:40.010276   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:40.111] core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I1017 13:09:40.112] core.sh:1271: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I1017 13:09:40.274] deployment.apps/nginx-deployment-resources resource requirements updated
W1017 13:09:40.375] error: unable to find container named redis
W1017 13:09:40.376] I1017 13:09:40.286259   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources", UID:"79ed2ad6-5df6-4989-a141-cb6bff75f37f", APIVersion:"apps/v1", ResourceVersion:"1889", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-55c547f795 to 0
W1017 13:09:40.376] I1017 13:09:40.294462   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources", UID:"79ed2ad6-5df6-4989-a141-cb6bff75f37f", APIVersion:"apps/v1", ResourceVersion:"1891", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
W1017 13:09:40.377] I1017 13:09:40.299861   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources-55c547f795", UID:"1f6064e0-d469-47da-a874-617b19875aee", APIVersion:"apps/v1", ResourceVersion:"1893", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-55c547f795-j5ms6
W1017 13:09:40.377] I1017 13:09:40.300790   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources-6d86564b45", UID:"f251ea4f-5412-43f5-a83e-e41fb00f9826", APIVersion:"apps/v1", ResourceVersion:"1896", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-m2phf
I1017 13:09:40.477] core.sh:1276: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1017 13:09:40.489] core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I1017 13:09:40.586] deployment.apps/nginx-deployment-resources resource requirements updated
W1017 13:09:40.687] E1017 13:09:40.529024   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:40.687] I1017 13:09:40.598247   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources", UID:"79ed2ad6-5df6-4989-a141-cb6bff75f37f", APIVersion:"apps/v1", ResourceVersion:"1911", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
W1017 13:09:40.688] I1017 13:09:40.603379   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources-67f8cfff5", UID:"fe2a3975-3161-49e0-bc03-1b1680013490", APIVersion:"apps/v1", ResourceVersion:"1915", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-c8tzd
W1017 13:09:40.688] I1017 13:09:40.606974   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources", UID:"79ed2ad6-5df6-4989-a141-cb6bff75f37f", APIVersion:"apps/v1", ResourceVersion:"1913", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c478d4fdb to 1
W1017 13:09:40.688] I1017 13:09:40.614131   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317768-18783", Name:"nginx-deployment-resources-6c478d4fdb", UID:"6540ed0b-b174-4fdb-b04b-c53e4aca977d", APIVersion:"apps/v1", ResourceVersion:"1920", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c478d4fdb-bjrlw
W1017 13:09:40.689] E1017 13:09:40.669037   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:40.789] core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1017 13:09:40.819] core.sh:1281: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1017 13:09:40.926] core.sh:1282: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
I1017 13:09:41.020] apiVersion: apps/v1
I1017 13:09:41.021] kind: Deployment
I1017 13:09:41.021] metadata:
... skipping 68 lines ...
I1017 13:09:41.040]     status: "True"
I1017 13:09:41.040]     type: Progressing
I1017 13:09:41.041]   observedGeneration: 4
I1017 13:09:41.041]   replicas: 4
I1017 13:09:41.041]   unavailableReplicas: 4
I1017 13:09:41.041]   updatedReplicas: 1
W1017 13:09:41.142] E1017 13:09:40.825204   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:41.143] E1017 13:09:41.011651   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:41.143] error: you must specify resources by --filename when --local is set.
W1017 13:09:41.144] Example resource specifications include:
W1017 13:09:41.144]    '-f rsrc.yaml'
W1017 13:09:41.144]    '--filename=rsrc.json'
I1017 13:09:41.245] core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1017 13:09:41.333] core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1017 13:09:41.447] core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
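The "resource requirements updated" lines above come from kubectl set resources, and the "unable to find container named redis" and --local-requires---filename errors in the warning stream bound its failure modes. A sketch, with nginx and perl assumed as the container names in nginx-deployment-resources:

  kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m             # all containers
  kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=100m    # error: unable to find container named redis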
... skipping 7 lines ...
I1017 13:09:41.650] +++ command: run_deployment_tests
I1017 13:09:41.665] +++ [1017 13:09:41] Creating namespace namespace-1571317781-3503
I1017 13:09:41.751] namespace/namespace-1571317781-3503 created
I1017 13:09:41.826] Context "test" modified.
I1017 13:09:41.834] +++ [1017 13:09:41] Testing deployments
I1017 13:09:41.915] deployment.apps/test-nginx-extensions created
W1017 13:09:42.016] E1017 13:09:41.530444   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:42.016] E1017 13:09:41.670823   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:42.016] E1017 13:09:41.826643   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:42.017] I1017 13:09:41.918636   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"test-nginx-extensions", UID:"4f239b35-4bc1-4b1b-81ef-0b59f4d4b900", APIVersion:"apps/v1", ResourceVersion:"1947", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5559c76db7 to 1
W1017 13:09:42.017] I1017 13:09:41.924534   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"test-nginx-extensions-5559c76db7", UID:"25a055e1-a691-413f-83c4-a53b897e33ae", APIVersion:"apps/v1", ResourceVersion:"1948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5559c76db7-xhrwt
W1017 13:09:42.018] E1017 13:09:42.012683   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:42.118] apps.sh:185: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
I1017 13:09:42.119] Successful
I1017 13:09:42.119] message:10
I1017 13:09:42.119] has not:2
I1017 13:09:42.211] Successful
I1017 13:09:42.211] message:apps/v1
I1017 13:09:42.211] has:apps/v1
I1017 13:09:42.298] deployment.apps "test-nginx-extensions" deleted
I1017 13:09:42.397] deployment.apps/test-nginx-apps created
W1017 13:09:42.498] I1017 13:09:42.401231   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"test-nginx-apps", UID:"4aa54910-0d54-4378-96b9-6a7e48778ff7", APIVersion:"apps/v1", ResourceVersion:"1961", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-79b9bd9585 to 1
W1017 13:09:42.499] I1017 13:09:42.407006   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"test-nginx-apps-79b9bd9585", UID:"900887ab-8da9-482e-9d7a-f12054214eab", APIVersion:"apps/v1", ResourceVersion:"1962", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-79b9bd9585-2jp48
W1017 13:09:42.532] E1017 13:09:42.531934   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:42.633] apps.sh:198: Successful get deploy test-nginx-apps {{(index .spec.template.spec.containers 0).name}}: nginx
I1017 13:09:42.633] Successful
I1017 13:09:42.633] message:10
I1017 13:09:42.633] has:10
I1017 13:09:42.680] Successful
I1017 13:09:42.681] message:apps/v1
I1017 13:09:42.681] has:apps/v1
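Blocks of the form "Successful / message:X / has:Y" (or "has not:Y") are the harness's string-containment checks: the command output (message) must contain, or must not contain, the given token. A rough shell equivalent (the command and comparison are illustrative, not the harness's actual helper):

    out=$(kubectl get deploy test-nginx-apps -o go-template='{{.apiVersion}}')
    case "$out" in
        *"apps/v1"*) echo Successful ;;
        *)           echo 'FAIL!' ;;
    esac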
W1017 13:09:42.781] E1017 13:09:42.671857   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:42.828] E1017 13:09:42.828080   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:42.929] 
I1017 13:09:42.929] FAIL!
I1017 13:09:42.929] Describe rs
I1017 13:09:42.930]   Expected Match: Name:
I1017 13:09:42.930]   Not found in:
I1017 13:09:42.930] Name:           test-nginx-apps-79b9bd9585
I1017 13:09:42.930] Namespace:      namespace-1571317781-3503
I1017 13:09:42.930] Selector:       app=test-nginx-apps,pod-template-hash=79b9bd9585
I1017 13:09:42.930] Labels:         app=test-nginx-apps
I1017 13:09:42.930]                 pod-template-hash=79b9bd9585
I1017 13:09:42.931] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1017 13:09:42.931]                 deployment.kubernetes.io/max-replicas: 2
I1017 13:09:42.931]                 deployment.kubernetes.io/revision: 1
I1017 13:09:42.931] Controlled By:  Deployment/test-nginx-apps
I1017 13:09:42.931] Replicas:       1 current / 1 desired
I1017 13:09:42.931] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1017 13:09:42.932] Pod Template:
I1017 13:09:42.932]   Labels:  app=test-nginx-apps
I1017 13:09:42.932]            pod-template-hash=79b9bd9585
I1017 13:09:42.932]   Containers:
I1017 13:09:42.932]    nginx:
I1017 13:09:42.932]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 7 lines ...
I1017 13:09:42.934]   ----    ------            ----  ----                   -------
I1017 13:09:42.934]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: test-nginx-apps-79b9bd9585-2jp48
I1017 13:09:42.934] 
I1017 13:09:42.934] 206 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
I1017 13:09:42.934] 
I1017 13:09:42.934] 
I1017 13:09:42.934] FAIL!
I1017 13:09:42.934] Describe pods
I1017 13:09:42.935]   Expected Match: Name:
I1017 13:09:42.935]   Not found in:
I1017 13:09:42.935] Name:           test-nginx-apps-79b9bd9585-2jp48
I1017 13:09:42.935] Namespace:      namespace-1571317781-3503
I1017 13:09:42.935] Priority:       0
... skipping 20 lines ...
I1017 13:09:42.937] 
I1017 13:09:42.938] 208 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
I1017 13:09:42.938] 
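The two FAIL! blocks above are describe-style assertions (the "206" and "208" trailers are the failing line numbers in test/cmd/apps.sh): the harness runs kubectl describe and checks the captured output for an expected token, here "Name:". Notably, the dumped text plainly contains "Name:", so the mismatch in this run presumably lies in how the output was captured or compared rather than in the describe text itself. A rough sketch of the check (helper details assumed):

    kubectl describe rs test-nginx-apps-79b9bd9585 | grep -q 'Name:' \
        && echo Successful || echo 'FAIL!'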
I1017 13:09:42.989] deployment.apps "test-nginx-apps" deleted
I1017 13:09:43.090] apps.sh:214: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:43.171] deployment.apps/nginx-with-command created
W1017 13:09:43.272] E1017 13:09:43.013994   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:43.273] I1017 13:09:43.173851   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"nginx-with-command", UID:"6b3cb0e6-f091-4f9e-8c5f-76a4b8086fe7", APIVersion:"apps/v1", ResourceVersion:"1976", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-757c6f58dd to 1
W1017 13:09:43.273] I1017 13:09:43.176807   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-with-command-757c6f58dd", UID:"577b860a-81ad-480d-9af2-81e0a122c0e8", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-757c6f58dd-tcdxz
I1017 13:09:43.374] apps.sh:218: Successful get deploy nginx-with-command {{(index .spec.template.spec.containers 0).name}}: nginx
I1017 13:09:43.374] deployment.apps "nginx-with-command" deleted
I1017 13:09:43.482] apps.sh:224: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:43.660] (Bdeployment.apps/deployment-with-unixuserid created
W1017 13:09:43.761] E1017 13:09:43.533374   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:43.761] I1017 13:09:43.663240   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"deployment-with-unixuserid", UID:"b0dac660-ff59-491e-812f-34d963790e7f", APIVersion:"apps/v1", ResourceVersion:"1990", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-8fcdfc94f to 1
W1017 13:09:43.762] I1017 13:09:43.667417   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"deployment-with-unixuserid-8fcdfc94f", UID:"085c9ba2-0588-428c-ba2d-5e4ce223ec71", APIVersion:"apps/v1", ResourceVersion:"1991", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-8fcdfc94f-s2tf7
W1017 13:09:43.762] E1017 13:09:43.672937   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:43.830] E1017 13:09:43.829406   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:43.930] apps.sh:228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
I1017 13:09:43.930] deployment.apps "deployment-with-unixuserid" deleted
I1017 13:09:43.967] apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:44.135] deployment.apps/nginx-deployment created
W1017 13:09:44.236] E1017 13:09:44.015363   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:44.237] I1017 13:09:44.138673   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment", UID:"2f783444-c7f9-4229-9c56-8d412bc64eda", APIVersion:"apps/v1", ResourceVersion:"2005", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 13:09:44.238] I1017 13:09:44.142565   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment-6986c7bc94", UID:"2e952c5d-16e9-4514-b417-c67299d151f6", APIVersion:"apps/v1", ResourceVersion:"2006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-kgzp6
W1017 13:09:44.238] I1017 13:09:44.147453   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment-6986c7bc94", UID:"2e952c5d-16e9-4514-b417-c67299d151f6", APIVersion:"apps/v1", ResourceVersion:"2006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-nmvpv
W1017 13:09:44.239] I1017 13:09:44.147635   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment-6986c7bc94", UID:"2e952c5d-16e9-4514-b417-c67299d151f6", APIVersion:"apps/v1", ResourceVersion:"2006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-wskdp
I1017 13:09:44.339] apps.sh:239: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3
I1017 13:09:44.375] deployment.apps "nginx-deployment" deleted
I1017 13:09:44.496] apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:44.613] apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:44.716] apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:44.810] deployment.apps/nginx-deployment created
W1017 13:09:44.911] E1017 13:09:44.535576   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:44.911] E1017 13:09:44.674535   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:44.912] I1017 13:09:44.813169   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment", UID:"ad3f01f8-aa55-465b-b2f6-5a51f62781ea", APIVersion:"apps/v1", ResourceVersion:"2028", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7f6fc565b9 to 1
W1017 13:09:44.912] I1017 13:09:44.816320   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment-7f6fc565b9", UID:"dabc75c3-57ea-482e-b8eb-a6273f68369d", APIVersion:"apps/v1", ResourceVersion:"2029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7f6fc565b9-fhd8d
W1017 13:09:44.912] E1017 13:09:44.830491   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:45.013] apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I1017 13:09:45.017] deployment.apps "nginx-deployment" deleted
W1017 13:09:45.118] E1017 13:09:45.017649   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:45.219] apps.sh:256: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:45.329] apps.sh:257: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I1017 13:09:45.526] replicaset.apps "nginx-deployment-7f6fc565b9" deleted
W1017 13:09:45.627] E1017 13:09:45.536744   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:45.676] E1017 13:09:45.675698   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:45.776] apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:45.844] deployment.apps/nginx-deployment created
W1017 13:09:45.945] E1017 13:09:45.832943   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:45.945] I1017 13:09:45.847601   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment", UID:"9b75cb61-cd47-408e-8941-4c47b8b10894", APIVersion:"apps/v1", ResourceVersion:"2046", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1017 13:09:45.946] I1017 13:09:45.851803   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment-6986c7bc94", UID:"374b7181-7fd3-44a6-b445-9d35707ff275", APIVersion:"apps/v1", ResourceVersion:"2047", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-gdzch
W1017 13:09:45.946] I1017 13:09:45.855666   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment-6986c7bc94", UID:"374b7181-7fd3-44a6-b445-9d35707ff275", APIVersion:"apps/v1", ResourceVersion:"2047", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-4x9sz
W1017 13:09:45.946] I1017 13:09:45.855857   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-deployment-6986c7bc94", UID:"374b7181-7fd3-44a6-b445-9d35707ff275", APIVersion:"apps/v1", ResourceVersion:"2047", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-cn47p
W1017 13:09:46.019] E1017 13:09:46.019152   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:46.120] apps.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I1017 13:09:46.120] horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
I1017 13:09:46.195] apps.sh:271: Successful get hpa nginx-deployment {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1017 13:09:46.289] horizontalpodautoscaler.autoscaling "nginx-deployment" deleted
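The HPA assertion above (minReplicas 2, maxReplicas 3, target CPU 80%) matches what a kubectl autoscale call would create; a sketch of the likely command, with values taken from the assertion and the command itself assumed:

    kubectl autoscale deployment nginx-deployment --min=2 --max=3 --cpu-percent=80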
I1017 13:09:46.387] deployment.apps "nginx-deployment" deleted
I1017 13:09:46.502] apps.sh:279: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1017 13:09:46.684] deployment.apps/nginx created
W1017 13:09:46.785] E1017 13:09:46.538982   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:46.786] E1017 13:09:46.677279   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:46.786] I1017 13:09:46.687681   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"nginx", UID:"4703ac01-6f71-4f73-9bc0-e49da0cbdb99", APIVersion:"apps/v1", ResourceVersion:"2071", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1017 13:09:46.787] I1017 13:09:46.691820   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-f87d999f7", UID:"c60d2887-b032-4fed-90ac-8ef429622183", APIVersion:"apps/v1", ResourceVersion:"2072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-2wcnv
W1017 13:09:46.787] I1017 13:09:46.694137   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-f87d999f7", UID:"c60d2887-b032-4fed-90ac-8ef429622183", APIVersion:"apps/v1", ResourceVersion:"2072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-6cq9m
W1017 13:09:46.788] I1017 13:09:46.695068   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-f87d999f7", UID:"c60d2887-b032-4fed-90ac-8ef429622183", APIVersion:"apps/v1", ResourceVersion:"2072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-pjv9x
W1017 13:09:46.835] E1017 13:09:46.834391   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:46.935] apps.sh:283: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1017 13:09:46.935] apps.sh:284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:09:47.012] deployment.apps/nginx skipped rollback (current template already matches revision 1)
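"skipped rollback (current template already matches revision 1)" is the message kubectl prints when an undo targets a revision whose pod template is identical to the current one; a sketch of the kind of command that produces it (revision number inferred from the message):

    kubectl rollout undo deployment nginx --to-revision=1
    # deployment.apps/nginx skipped rollback (current template already matches revision 1)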
W1017 13:09:47.113] E1017 13:09:47.020812   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1017 13:09:47.213] apps.sh:287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1017 13:09:47.315] deployment.apps/nginx configured
W1017 13:09:47.416] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1017 13:09:47.416] I1017 13:09:47.318898   53264 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571317781-3503", Name:"nginx", UID:"4703ac01-6f71-4f73-9bc0-e49da0cbdb99", APIVersion:"apps/v1", ResourceVersion:"2085", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-78487f9fd7 to 1
W1017 13:09:47.417] I1017 13:09:47.321836   53264 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571317781-3503", Name:"nginx-78487f9fd7", UID:"215f2cb4-63dc-441f-af07-05f2858e7108", APIVersion:"apps/v1", ResourceVersion:"2086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-gw7p2
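The apply warning above fires because the deployment was created without the kubectl.kubernetes.io/last-applied-configuration annotation that apply diffs against. A hypothetical way to avoid it (manifest filename illustrative):

    kubectl create -f nginx-deployment.yaml --save-config   # records last-applied-configuration
    kubectl apply -f nginx-deployment.yaml                  # subsequent applies diff cleanly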
I1017 13:09:47.518] apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 13:09:47.536]     Image:	k8s.gcr.io/nginx:test-cmd
I1017 13:09:47.634] apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1017 13:09:47.734] deployment.apps/nginx rolled back
W1017 13:09:47.835] E1017 13:09:47.540362   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:47.836] E1017 13:09:47.678591   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:47.836] E1017 13:09:47.835894   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1017 13:09:48.022] E1017 13:09:48.022155   53264 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource