PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-10-18 17:42
Elapsed: 29m25s
Revision:
Builder: gke-prow-ssd-pool-1a225945-8w66
Refs: master:54a30700, 82703:823183a9
pod: 844374f7-f1ce-11e9-a6e9-3e8153a50efe
infra-commit: b88ef36d5
repo: k8s.io/kubernetes
repo-commit: 19aaa05af11ae9c64dd41078879856ad8fe633ab
repos: {u'k8s.io/kubernetes': u'master:54a30700a38452a5113adcfba0f98adcc5e05f2d,82703:823183a9166e58f9101fc9f94b047e697b4b5e0b'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.10s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
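The command above assumes a local etcd reachable at http://127.0.0.1:2379, which is the endpoint the test apiserver dials throughout the captured output below. A minimal repro sketch from a kubernetes checkout, assuming etcd is installed (the data directory is a hypothetical scratch path):

    # start a throwaway etcd for the integration test (scratch dir is illustrative)
    etcd --data-dir=/tmp/etcd-scratch --listen-client-urls=http://127.0.0.1:2379 --advertise-client-urls=http://127.0.0.1:2379 &
    # run only the failing test
    go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$

The captured test output follows.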
=== RUN   TestSchedulerCreationFromConfigMap
W1018 18:08:42.756960  104271 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1018 18:08:42.757154  104271 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1018 18:08:42.757245  104271 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1018 18:08:42.757316  104271 master.go:261] Using reconciler: 
I1018 18:08:42.759193  104271 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.760179  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.760453  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.761563  104271 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1018 18:08:42.761652  104271 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1018 18:08:42.761667  104271 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.762053  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.762091  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.762869  104271 store.go:1342] Monitoring events count at <storage-prefix>//events
I1018 18:08:42.762965  104271 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.763037  104271 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1018 18:08:42.763197  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.763223  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.764133  104271 watch_cache.go:409] Replace watchCache (rev: 43769) 
I1018 18:08:42.764190  104271 watch_cache.go:409] Replace watchCache (rev: 43770) 
I1018 18:08:42.764965  104271 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1018 18:08:42.765012  104271 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1018 18:08:42.765309  104271 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.765656  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.766099  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.766099  104271 watch_cache.go:409] Replace watchCache (rev: 43770) 
I1018 18:08:42.766968  104271 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1018 18:08:42.767029  104271 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1018 18:08:42.767453  104271 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.767718  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.767868  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.768009  104271 watch_cache.go:409] Replace watchCache (rev: 43770) 
I1018 18:08:42.768945  104271 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1018 18:08:42.769137  104271 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1018 18:08:42.769163  104271 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.769487  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.769527  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.770423  104271 watch_cache.go:409] Replace watchCache (rev: 43771) 
I1018 18:08:42.770568  104271 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1018 18:08:42.770597  104271 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1018 18:08:42.770742  104271 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.771234  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.771299  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.772279  104271 watch_cache.go:409] Replace watchCache (rev: 43771) 
I1018 18:08:42.772446  104271 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1018 18:08:42.772489  104271 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1018 18:08:42.772630  104271 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.772877  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.772980  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.774871  104271 watch_cache.go:409] Replace watchCache (rev: 43772) 
I1018 18:08:42.776148  104271 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1018 18:08:42.776278  104271 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.776479  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.776482  104271 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1018 18:08:42.776503  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.777311  104271 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1018 18:08:42.777389  104271 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1018 18:08:42.777588  104271 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.777908  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.777933  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.778334  104271 watch_cache.go:409] Replace watchCache (rev: 43773) 
I1018 18:08:42.778933  104271 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1018 18:08:42.778976  104271 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1018 18:08:42.779127  104271 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.779352  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.779393  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.780052  104271 watch_cache.go:409] Replace watchCache (rev: 43773) 
I1018 18:08:42.780227  104271 watch_cache.go:409] Replace watchCache (rev: 43773) 
I1018 18:08:42.781134  104271 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1018 18:08:42.781257  104271 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1018 18:08:42.781391  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.781627  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.781653  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.782436  104271 watch_cache.go:409] Replace watchCache (rev: 43774) 
I1018 18:08:42.782566  104271 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1018 18:08:42.782583  104271 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1018 18:08:42.782749  104271 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.782982  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.783005  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.783653  104271 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1018 18:08:42.783861  104271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.783878  104271 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1018 18:08:42.783908  104271 watch_cache.go:409] Replace watchCache (rev: 43774) 
I1018 18:08:42.784089  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.784122  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.785165  104271 watch_cache.go:409] Replace watchCache (rev: 43774) 
I1018 18:08:42.785296  104271 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1018 18:08:42.785413  104271 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1018 18:08:42.785630  104271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.786028  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.786170  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.786943  104271 watch_cache.go:409] Replace watchCache (rev: 43774) 
I1018 18:08:42.788221  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.788245  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.789062  104271 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.789270  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.789297  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.789986  104271 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1018 18:08:42.790017  104271 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1018 18:08:42.790065  104271 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1018 18:08:42.790470  104271 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.790699  104271 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.791302  104271 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.791987  104271 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.792564  104271 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.792917  104271 watch_cache.go:409] Replace watchCache (rev: 43776) 
I1018 18:08:42.793127  104271 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.793511  104271 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.793667  104271 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.793887  104271 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.794377  104271 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.794902  104271 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.795108  104271 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.795644  104271 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.795846  104271 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.796262  104271 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.796450  104271 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.796923  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.797185  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.797346  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.797484  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.797666  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.797818  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.797930  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.798383  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.798537  104271 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.799231  104271 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.799784  104271 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.800070  104271 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.800257  104271 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.800783  104271 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.801021  104271 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.801597  104271 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.802140  104271 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.802715  104271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.803472  104271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.803788  104271 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.804002  104271 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1018 18:08:42.804123  104271 master.go:464] Enabling API group "authentication.k8s.io".
I1018 18:08:42.804202  104271 master.go:464] Enabling API group "authorization.k8s.io".
I1018 18:08:42.804458  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.805055  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.805182  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.806251  104271 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1018 18:08:42.806344  104271 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1018 18:08:42.806617  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.807348  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.807536  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.810005  104271 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1018 18:08:42.810051  104271 watch_cache.go:409] Replace watchCache (rev: 43781) 
I1018 18:08:42.810132  104271 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1018 18:08:42.811190  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.811478  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.811519  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.811629  104271 watch_cache.go:409] Replace watchCache (rev: 43782) 
I1018 18:08:42.812281  104271 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1018 18:08:42.812306  104271 master.go:464] Enabling API group "autoscaling".
I1018 18:08:42.812341  104271 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1018 18:08:42.812489  104271 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.812670  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.812691  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.813404  104271 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1018 18:08:42.813467  104271 watch_cache.go:409] Replace watchCache (rev: 43782) 
I1018 18:08:42.813468  104271 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1018 18:08:42.813602  104271 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.814003  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.814028  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.814483  104271 watch_cache.go:409] Replace watchCache (rev: 43782) 
I1018 18:08:42.814548  104271 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1018 18:08:42.814566  104271 master.go:464] Enabling API group "batch".
I1018 18:08:42.814669  104271 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1018 18:08:42.814730  104271 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.814928  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.814953  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.815953  104271 watch_cache.go:409] Replace watchCache (rev: 43783) 
I1018 18:08:42.816181  104271 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1018 18:08:42.816208  104271 master.go:464] Enabling API group "certificates.k8s.io".
I1018 18:08:42.816279  104271 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1018 18:08:42.816393  104271 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.816620  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.816641  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.818174  104271 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1018 18:08:42.818221  104271 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1018 18:08:42.818279  104271 watch_cache.go:409] Replace watchCache (rev: 43783) 
I1018 18:08:42.818355  104271 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.818537  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.818560  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.818884  104271 watch_cache.go:409] Replace watchCache (rev: 43783) 
I1018 18:08:42.819933  104271 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1018 18:08:42.820013  104271 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1018 18:08:42.820032  104271 master.go:464] Enabling API group "coordination.k8s.io".
I1018 18:08:42.820139  104271 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1018 18:08:42.820314  104271 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.820497  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.820519  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.821699  104271 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1018 18:08:42.821722  104271 watch_cache.go:409] Replace watchCache (rev: 43784) 
I1018 18:08:42.821736  104271 master.go:464] Enabling API group "extensions".
I1018 18:08:42.822028  104271 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1018 18:08:42.822069  104271 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.822529  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.822733  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.823645  104271 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1018 18:08:42.823713  104271 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1018 18:08:42.824117  104271 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.824309  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.824337  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.825158  104271 watch_cache.go:409] Replace watchCache (rev: 43785) 
I1018 18:08:42.825970  104271 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1018 18:08:42.825995  104271 master.go:464] Enabling API group "networking.k8s.io".
I1018 18:08:42.826047  104271 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1018 18:08:42.826062  104271 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.826297  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.826324  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.827347  104271 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1018 18:08:42.827376  104271 master.go:464] Enabling API group "node.k8s.io".
I1018 18:08:42.827395  104271 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1018 18:08:42.827676  104271 watch_cache.go:409] Replace watchCache (rev: 43785) 
I1018 18:08:42.827712  104271 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.827734  104271 watch_cache.go:409] Replace watchCache (rev: 43785) 
I1018 18:08:42.828087  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.828113  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.829056  104271 watch_cache.go:409] Replace watchCache (rev: 43785) 
I1018 18:08:42.830383  104271 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1018 18:08:42.830413  104271 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1018 18:08:42.830817  104271 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.830984  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.832024  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.831693  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.832804  104271 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1018 18:08:42.832833  104271 master.go:464] Enabling API group "policy".
I1018 18:08:42.832893  104271 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.832980  104271 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1018 18:08:42.834449  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.835421  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.835724  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.836666  104271 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1018 18:08:42.836753  104271 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1018 18:08:42.836902  104271 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.837127  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.837154  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.838238  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.838611  104271 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1018 18:08:42.838657  104271 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1018 18:08:42.838667  104271 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.838885  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.838907  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.840383  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.840453  104271 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1018 18:08:42.840640  104271 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.840882  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.841023  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.841053  104271 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1018 18:08:42.842230  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.844054  104271 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1018 18:08:42.844137  104271 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.844301  104271 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1018 18:08:42.845429  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.846749  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.846803  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.848124  104271 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1018 18:08:42.848289  104271 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1018 18:08:42.851554  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.852964  104271 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.854719  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.854760  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.855473  104271 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1018 18:08:42.855539  104271 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1018 18:08:42.855545  104271 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.855760  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.855810  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.856655  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.857175  104271 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1018 18:08:42.857223  104271 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1018 18:08:42.857373  104271 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.857956  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.858021  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.858421  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.858609  104271 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1018 18:08:42.858642  104271 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1018 18:08:42.858646  104271 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1018 18:08:42.859621  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.860596  104271 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.860895  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.860929  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.861814  104271 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1018 18:08:42.861951  104271 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1018 18:08:42.862128  104271 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.862385  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.862452  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.863363  104271 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1018 18:08:42.863446  104271 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1018 18:08:42.863719  104271 master.go:464] Enabling API group "scheduling.k8s.io".
I1018 18:08:42.863842  104271 master.go:453] Skipping disabled API group "settings.k8s.io".
I1018 18:08:42.864161  104271 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.864467  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.864494  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.864517  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.865255  104271 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1018 18:08:42.865374  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.865435  104271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.865515  104271 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1018 18:08:42.865645  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.865673  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.866381  104271 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1018 18:08:42.866415  104271 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1018 18:08:42.866552  104271 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.866756  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.867055  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.867077  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.867563  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.868064  104271 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1018 18:08:42.868091  104271 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1018 18:08:42.868122  104271 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.868311  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.868353  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.869436  104271 watch_cache.go:409] Replace watchCache (rev: 43786) 
I1018 18:08:42.870969  104271 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1018 18:08:42.871364  104271 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1018 18:08:42.871382  104271 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.871856  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.871953  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.872618  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.873965  104271 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1018 18:08:42.874036  104271 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1018 18:08:42.875096  104271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.875141  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.875434  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.875570  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.876748  104271 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1018 18:08:42.876786  104271 master.go:464] Enabling API group "storage.k8s.io".
I1018 18:08:42.876931  104271 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1018 18:08:42.876984  104271 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.877231  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.877254  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.877825  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.879619  104271 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1018 18:08:42.879707  104271 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1018 18:08:42.879839  104271 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.880086  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.880111  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.880598  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.881100  104271 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1018 18:08:42.881168  104271 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1018 18:08:42.881349  104271 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.881580  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.881619  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.882221  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.883109  104271 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1018 18:08:42.883146  104271 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1018 18:08:42.883250  104271 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.883630  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.883659  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.884309  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.884735  104271 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1018 18:08:42.884798  104271 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1018 18:08:42.884930  104271 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.885212  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.885273  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.885725  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.886625  104271 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1018 18:08:42.886652  104271 master.go:464] Enabling API group "apps".
I1018 18:08:42.886708  104271 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.886930  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.886956  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.887040  104271 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1018 18:08:42.887722  104271 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1018 18:08:42.887933  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.887945  104271 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1018 18:08:42.888113  104271 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.888497  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.888598  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.889339  104271 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1018 18:08:42.889360  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.889390  104271 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1018 18:08:42.889506  104271 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.889937  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.889960  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.891043  104271 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1018 18:08:42.891103  104271 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.891274  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.891342  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.891392  104271 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1018 18:08:42.891888  104271 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1018 18:08:42.891926  104271 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1018 18:08:42.891967  104271 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1018 18:08:42.891980  104271 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.892323  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:42.892350  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:42.892813  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.892972  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.893111  104271 store.go:1342] Monitoring events count at <storage-prefix>//events
I1018 18:08:42.893137  104271 master.go:464] Enabling API group "events.k8s.io".
I1018 18:08:42.893321  104271 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1018 18:08:42.893356  104271 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.893576  104271 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.893878  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.893953  104271 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.894101  104271 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.894314  104271 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.894440  104271 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.894672  104271 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.894828  104271 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.894894  104271 watch_cache.go:409] Replace watchCache (rev: 43787) 
I1018 18:08:42.894967  104271 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.895083  104271 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.896744  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.897051  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.897660  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.897930  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.898909  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.899259  104271 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.900028  104271 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.900346  104271 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.901032  104271 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.901336  104271 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 18:08:42.901419  104271 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1018 18:08:42.902202  104271 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.902441  104271 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.902874  104271 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.903525  104271 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.904381  104271 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.905232  104271 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.905657  104271 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.906559  104271 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.909908  104271 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.910523  104271 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.911663  104271 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 18:08:42.911746  104271 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1018 18:08:42.912552  104271 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.912935  104271 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.913402  104271 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.913950  104271 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.914443  104271 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.915096  104271 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.915639  104271 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.916125  104271 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.916576  104271 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.917262  104271 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.918168  104271 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 18:08:42.918329  104271 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1018 18:08:42.919000  104271 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.919526  104271 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 18:08:42.919631  104271 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1018 18:08:42.920187  104271 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.920733  104271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.921151  104271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.921703  104271 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.922164  104271 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.922749  104271 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.923465  104271 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 18:08:42.923584  104271 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1018 18:08:42.924256  104271 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.924856  104271 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.925378  104271 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.926123  104271 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.926435  104271 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.926830  104271 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.927540  104271 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.927873  104271 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.928179  104271 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.929027  104271 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.929615  104271 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.930116  104271 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 18:08:42.930283  104271 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1018 18:08:42.930391  104271 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
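The "Skipping API ... because it has no resources" warnings come from the generic apiserver installing only those group versions that have at least one registered storage. Below is a minimal sketch of that guard with simplified stand-in types (the real check lives in genericapiserver.go, near the line 404 shown in the log prefix; the map type here is illustrative, not the actual apiGroupInfo field):

```go
package main

import "log"

// installableVersions keeps only the group versions that have at least one
// registered resource; empty versions are skipped with a warning, which is
// what produces the genericapiserver.go:404 lines above. The map type is a
// simplified stand-in for the real per-version storage map.
func installableVersions(group string, storageByVersion map[string]map[string]struct{}) []string {
	var versions []string
	for version, resources := range storageByVersion {
		if len(resources) == 0 {
			log.Printf("Skipping API %s/%s because it has no resources.", group, version)
			continue
		}
		versions = append(versions, version)
	}
	return versions
}

func main() {
	storage := map[string]map[string]struct{}{
		"v1":       {"storageclasses": {}, "volumeattachments": {}},
		"v1alpha1": {}, // nothing registered for this version -> skipped
	}
	log.Println(installableVersions("storage.k8s.io", storage))
}
```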
I1018 18:08:42.931239  104271 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.931945  104271 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.932723  104271 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.933334  104271 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 18:08:42.934296  104271 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"e5d77689-4e08-4f86-916d-91362df530e6", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
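Every "storing X in Y, reading as Z" line above echoes one shared backend config: a single local etcd endpoint, a per-test UUID prefix, paging enabled, a 5-minute compaction interval and a 1-minute object-count poll period. A sketch that reconstructs it using only the fields the log itself prints (import path assumed to be k8s.io/apiserver/pkg/storage/storagebackend; later apiserver releases have reshaped this struct):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// The same backend config is echoed for every resource above: one local
	// etcd endpoint, a per-test UUID prefix, paging enabled, 5m compaction
	// (300000000000ns) and a 1m object-count poll period (60000000000ns).
	cfg := storagebackend.Config{
		Prefix: "e5d77689-4e08-4f86-916d-91362df530e6",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute,
		CountMetricPollPeriod: time.Minute,
	}
	fmt.Printf("%+v\n", cfg)
}
```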
I1018 18:08:42.938013  104271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 18:08:42.938047  104271 healthz.go:177] healthz check poststarthook/generic-apiserver-start-informers failed: not finished
I1018 18:08:42.938059  104271 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1018 18:08:42.938069  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:42.938079  104271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 18:08:42.938087  104271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 18:08:42.938095  104271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[-]poststarthook/generic-apiserver-start-informers failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 18:08:42.938247  104271 httplog.go:90] GET /healthz: (528.116µs) 0 [Go-http-client/1.1 127.0.0.1:46386]
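The [+]/[-] block above is the aggregated /healthz body: every named check runs, failing checks are reported as "reason withheld" (the actual reason is only logged server-side, at healthz.go:177), and any failure turns the response into a 500 ending in "healthz check failed". A hand-rolled sketch of that aggregation pattern, not the actual k8s.io/apiserver healthz package:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

// namedCheck mirrors the ping/log/etcd/poststarthook entries above.
type namedCheck struct {
	name string
	run  func() error
}

// healthzHandler runs every check, prints "[+]name ok" or
// "[-]name failed: reason withheld", and answers 500 plus a trailing
// "healthz check failed" line when anything fails.
func healthzHandler(checks []namedCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var body strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.run(); err != nil {
				// The real handler logs err server-side and withholds it here.
				fmt.Fprintf(&body, "[-]%s failed: reason withheld\n", c.name)
				failed = true
			} else {
				fmt.Fprintf(&body, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			body.WriteString("healthz check failed\n")
			w.WriteHeader(http.StatusInternalServerError)
		}
		fmt.Fprint(w, body.String())
	}
}

func main() {
	checks := []namedCheck{
		{"ping", func() error { return nil }},
		{"etcd", func() error { return fmt.Errorf("etcd client connection not yet established") }},
	}
	http.Handle("/healthz", healthzHandler(checks))
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```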
I1018 18:08:42.940600  104271 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.103485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46388]
I1018 18:08:42.946401  104271 httplog.go:90] GET /api/v1/services: (1.920784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46388]
I1018 18:08:42.951315  104271 httplog.go:90] GET /api/v1/services: (1.950742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46388]
I1018 18:08:42.954178  104271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 18:08:42.954206  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:42.954214  104271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 18:08:42.954221  104271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 18:08:42.954226  104271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 18:08:42.954247  104271 httplog.go:90] GET /healthz: (188.02µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46388]
I1018 18:08:42.956168  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.929649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:42.957531  104271 httplog.go:90] GET /api/v1/services: (1.565272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46388]
I1018 18:08:42.958378  104271 httplog.go:90] POST /api/v1/namespaces: (1.788946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:42.958716  104271 httplog.go:90] GET /api/v1/services: (2.245006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46390]
I1018 18:08:42.960220  104271 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.03924ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:42.962177  104271 httplog.go:90] POST /api/v1/namespaces: (1.505561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:42.963464  104271 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (991.063µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:42.965156  104271 httplog.go:90] POST /api/v1/namespaces: (1.306774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
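The three GET-404/POST-201 pairs above are the bootstrap controller making sure the system namespaces (kube-system, kube-public, kube-node-lease) exist. A sketch of that create-if-missing step with client-go (modern, context-taking signatures; the function names are illustrative, not the actual pkg/master code):

```go
package bootstrap

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace reproduces the GET-404/POST-201 pattern from the log:
// look the namespace up, and create it only when it does not exist yet.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil || !apierrors.IsNotFound(err) {
		return err // already exists, or a real error
	}
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err = cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil // lost a race with another creator; that is fine
	}
	return err
}

// ensureSystemNamespaces walks the three namespaces created above.
func ensureSystemNamespaces(ctx context.Context, cs kubernetes.Interface) error {
	for _, name := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(ctx, cs, name); err != nil {
			return err
		}
	}
	return nil
}
```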
I1018 18:08:43.039451  104271 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 18:08:43.039494  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:43.039509  104271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 18:08:43.039520  104271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 18:08:43.039529  104271 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 18:08:43.039572  104271 httplog.go:90] GET /healthz: (282.773µs) 0 [Go-http-client/1.1 127.0.0.1:46386]
[... the same healthz poll, still failing on etcd and the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and ca-registration post-start hooks, repeats with identical results roughly every 100ms per caller from 18:08:43.054 through 18:08:43.755 (Go-http-client and scheduler.test, both on 127.0.0.1:46386) ...]
I1018 18:08:43.756602  104271 client.go:357] parsed scheme: "endpoint"
I1018 18:08:43.756854  104271 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 18:08:43.841013  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:43.841061  104271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 18:08:43.841073  104271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 18:08:43.841084  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 18:08:43.841166  104271 httplog.go:90] GET /healthz: (1.927569ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:43.856111  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:43.856301  104271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 18:08:43.856397  104271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 18:08:43.856468  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 18:08:43.856662  104271 httplog.go:90] GET /healthz: (1.904736ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
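At 18:08:43.841 the etcd check finally flips to [+]etcd ok: the client dialed at 18:08:43.756 has connected. A rough sketch of such a readiness probe using etcd's clientv3 (illustrative only; the apiserver's real check is wired through its storage backend, not written like this):

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// etcdReady performs a cheap read against etcd; until the client connection
// is established this fails, matching the long run of "etcd client
// connection not yet established" entries above.
func etcdReady(endpoints []string) error {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 2 * time.Second,
	})
	if err != nil {
		return err
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// The key's value is irrelevant; only reachability matters.
	_, err = cli.Get(ctx, "health")
	return err
}

func main() {
	fmt.Println(etcdReady([]string{"http://127.0.0.1:2379"}))
}
```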
I1018 18:08:43.939516  104271 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.238081ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:43.939716  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.20248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46500]
I1018 18:08:43.940217  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:43.940241  104271 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 18:08:43.940250  104271 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 18:08:43.940258  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 18:08:43.940290  104271 httplog.go:90] GET /healthz: (855.883µs) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:43.941208  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.029924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46500]
I1018 18:08:43.941507  104271 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.499797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:43.942978  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.409186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46500]
I1018 18:08:43.943046  104271 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1018 18:08:43.943213  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.928577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46388]
I1018 18:08:43.944954  104271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.435868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46388]
I1018 18:08:43.945348  104271 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.149414ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:43.945422  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.032497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46500]
I1018 18:08:43.947195  104271 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.514013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:43.947399  104271 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1018 18:08:43.947424  104271 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
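storage_scheduling.go bootstraps both built-in priority classes with the same probe-then-create pattern, using exactly the values printed above (system-node-critical = 2000001000, system-cluster-critical = 2000000000). A sketch against client-go's scheduling/v1 surface (the log is still talking to the v1beta1 endpoint; the function name is illustrative):

```go
package bootstrap

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// systemPriorityClasses mirrors the two classes created in the log,
// with the exact values it prints.
var systemPriorityClasses = []schedulingv1.PriorityClass{
	{ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"}, Value: 2000001000},
	{ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"}, Value: 2000000000},
}

// ensureSystemPriorityClasses creates each class only if it is missing, so
// reruns report "already exist" instead of failing.
func ensureSystemPriorityClasses(ctx context.Context, cs kubernetes.Interface) error {
	for i := range systemPriorityClasses {
		pc := &systemPriorityClasses[i]
		_, err := cs.SchedulingV1().PriorityClasses().Get(ctx, pc.Name, metav1.GetOptions{})
		if err == nil {
			continue // already exists
		}
		if !apierrors.IsNotFound(err) {
			return err
		}
		if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
			return err
		}
	}
	return nil
}
```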
I1018 18:08:43.948042  104271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.699504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46388]
I1018 18:08:43.948044  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.312116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.949093  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (717.877µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.953153  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (3.710878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.954625  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (945.9µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.955506  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:43.955526  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:43.955560  104271 httplog.go:90] GET /healthz: (880.983µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:43.956403  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (936.121µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.958399  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.677625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.960700  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.723941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.960938  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1018 18:08:43.962067  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (946.663µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.964001  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.635692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.964165  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
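Everything from here to the end of the section is the rbac/bootstrap-roles post-start hook repeating one rhythm per default role: GET returns 404, POST returns 201, and storage_rbac.go:219 logs "created clusterrole...". A condensed sketch of that pass (names are illustrative; the real hook also reconciles roles that exist but have drifted, which is omitted here):

```go
package bootstrap

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

// ensureClusterRoles creates every bootstrap cluster role that does not
// exist yet, producing one "created clusterrole" line per role as in the
// log above. Updating existing-but-stale roles is left out of this sketch.
func ensureClusterRoles(ctx context.Context, cs kubernetes.Interface, roles []rbacv1.ClusterRole) error {
	for i := range roles {
		role := &roles[i]
		_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
		if err == nil {
			continue // role already present
		}
		if !apierrors.IsNotFound(err) {
			return err
		}
		if _, err := cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{}); err != nil {
			return err
		}
		klog.Infof("created clusterrole.rbac.authorization.k8s.io/%s", role.Name)
	}
	return nil
}
```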
I1018 18:08:43.965748  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.353865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.967935  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.624981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.968142  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1018 18:08:43.969380  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.036445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.974862  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.829598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.975339  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1018 18:08:43.978068  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.006567ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.980140  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.66624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.980386  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1018 18:08:43.983198  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.632795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.985654  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.964467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.986178  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1018 18:08:43.987500  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (953.503µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.992998  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.684574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.993249  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1018 18:08:43.994636  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.15978ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.996650  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.599923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:43.996858  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1018 18:08:43.998374  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.253882ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.000814  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.932594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.001124  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1018 18:08:44.002505  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (992.472µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.005281  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.343725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.005633  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1018 18:08:44.007120  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.117473ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.010971  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.94589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.011446  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1018 18:08:44.012483  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (855.122µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.015042  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.000462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.015287  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1018 18:08:44.016464  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (998.27µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.018269  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.400555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.018449  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1018 18:08:44.019288  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (659.352µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.022774  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.146431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.023056  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1018 18:08:44.024498  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.005756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.027142  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.120392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.027366  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1018 18:08:44.028715  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.171873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.030669  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.328224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.030966  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1018 18:08:44.032752  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.640574ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.036087  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.754478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.036263  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1018 18:08:44.037382  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (939.348µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.039741  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.885463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.040976  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1018 18:08:44.043748  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.044065  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.044295  104271 httplog.go:90] GET /healthz: (5.062272ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:44.044021  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (2.629353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.046746  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.73804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.047179  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1018 18:08:44.048381  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.008962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.052328  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.199089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.052657  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1018 18:08:44.054529  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.495691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.055719  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.055752  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.055969  104271 httplog.go:90] GET /healthz: (1.301332ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.056742  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.703129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.057196  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1018 18:08:44.058489  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.064036ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.060572  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.656702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.060995  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1018 18:08:44.062732  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.40365ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.065865  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.579659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.066165  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1018 18:08:44.067467  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.032367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.070567  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.507054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.070857  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1018 18:08:44.074625  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (3.502298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.077430  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.782168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.077689  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1018 18:08:44.079081  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.201282ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.082392  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.7143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.082593  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1018 18:08:44.084162  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.006846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.086054  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.594161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.086330  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1018 18:08:44.087570  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.074125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.090535  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.189582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.091007  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1018 18:08:44.092687  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.51589ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.094717  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.631618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.095149  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
E1018 18:08:44.095512  104271 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:43525/apis/events.k8s.io/v1beta1/namespaces/permit-pluginb5ac3495-91e1-43f7-b456-69df168503a9/events: dial tcp 127.0.0.1:43525: connect: connection refused' (may retry after sleeping)
E1018 18:08:44.095545  104271 event_broadcaster.go:197] Unable to write event '&v1beta1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"test-pod.15cecffba91de677", GenerateName:"", Namespace:"permit-pluginb5ac3495-91e1-43f7-b456-69df168503a9", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xbf629dabee7c7e9b, ext:52153626428, loc:(*time.Location)(0xa8efa80)}}, Series:(*v1beta1.EventSeries)(nil), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-15d470b51f19", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"permit-pluginb5ac3495-91e1-43f7-b456-69df168503a9", Name:"test-pod", UID:"dc0ce8d1-199a-48ea-9afc-87a5db025164", APIVersion:"v1", ResourceVersion:"29108", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"pod \"test-pod\" rejected due to timeout after waiting 3s at permit", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"default-scheduler", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}' (retry limit exceeded!)
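The two E-lines above are the events.k8s.io broadcaster giving up on an event left over from an earlier test apiserver: 127.0.0.1:43525 is already shut down, every POST gets connection refused, and once the retry budget is spent the event is dropped with "retry limit exceeded!". A minimal sketch of that sleep-and-retry shape (illustrative only; the real logic lives in client-go's events broadcaster):

```go
package main

import (
	"fmt"
	"time"
)

// postEventWithRetry retries a failed write a fixed number of times,
// sleeping between attempts, then drops the event, mirroring the
// "may retry after sleeping" / "retry limit exceeded!" pair above.
func postEventWithRetry(post func() error, retries int, backoff time.Duration) {
	for attempt := 0; attempt <= retries; attempt++ {
		err := post()
		if err == nil {
			return
		}
		if attempt < retries {
			fmt.Printf("Unable to write event: %q (may retry after sleeping)\n", err)
			time.Sleep(backoff)
			continue
		}
		fmt.Printf("Unable to write event (retry limit exceeded!): %v\n", err)
	}
}

func main() {
	post := func() error {
		return fmt.Errorf("dial tcp 127.0.0.1:43525: connect: connection refused")
	}
	postEventWithRetry(post, 2, 10*time.Millisecond)
}
```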
I1018 18:08:44.096576  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.214397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.098824  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.778236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.099068  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1018 18:08:44.100678  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.302272ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.103258  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.059037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.103466  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1018 18:08:44.104682  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.063967ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.107397  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.125933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.107650  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1018 18:08:44.109188  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.334021ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.112295  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.634572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.112623  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1018 18:08:44.114283  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.202941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.116327  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.641199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.116573  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1018 18:08:44.117908  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.084516ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.121021  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.933577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.121204  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1018 18:08:44.122152  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (767.198µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.124520  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.861657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.124986  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1018 18:08:44.126358  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (976.142µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.128485  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.129019  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1018 18:08:44.130503  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.268523ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.134246  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.033941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.134633  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1018 18:08:44.135969  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.000174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.138210  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.867811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.138512  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1018 18:08:44.139611  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (828.101µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.140296  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.140331  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.140376  104271 httplog.go:90] GET /healthz: (1.047113ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
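[annotation] Each GET /healthz above runs the registered checks in order and prints one [+]/[-] line per check. While the rbac/bootstrap-roles post-start hook is still seeding the default roles it reports "not finished", so the endpoint returns a failure and the test's poller keeps retrying; these blocks repeat below until bootstrapping completes. A toy aggregator in the same spirit (hypothetical check names, not the apiserver's healthz package):

```go
package main

import "fmt"

// check pairs a name with a probe, like the entries in the [+]/[-] list above.
type check struct {
	name string
	run  func() error
}

// runHealthz prints one line per check in the log's format and reports
// overall failure if any check fails.
func runHealthz(checks []check) bool {
	healthy := true
	for _, c := range checks {
		if err := c.run(); err != nil {
			fmt.Printf("[-]%s failed: reason withheld\n", c.name)
			healthy = false
		} else {
			fmt.Printf("[+]%s ok\n", c.name)
		}
	}
	if !healthy {
		fmt.Println("healthz check failed")
	}
	return healthy
}

func main() {
	bootstrapDone := false // flips to true once the RBAC hook finishes
	runHealthz([]check{
		{"ping", func() error { return nil }},
		{"poststarthook/rbac/bootstrap-roles", func() error {
			if !bootstrapDone {
				return fmt.Errorf("not finished")
			}
			return nil
		}},
	})
}
```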
I1018 18:08:44.142043  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.547131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.142261  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1018 18:08:44.143553  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.016469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.146176  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.13451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.146523  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1018 18:08:44.147760  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (963.304µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.151832  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.404131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.152205  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1018 18:08:44.153461  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (990.535µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.155575  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.155610  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.155646  104271 httplog.go:90] GET /healthz: (1.024114ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.156107  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.125375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.156379  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1018 18:08:44.157583  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (962.326µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.159855  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.739359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.160047  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1018 18:08:44.161207  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (981.735µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.163578  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.92215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.163740  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1018 18:08:44.165230  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.08627ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.167526  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.899737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.167973  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1018 18:08:44.169403  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.121613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.172037  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.742511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.172318  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1018 18:08:44.173904  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.225659ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.176696  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.937383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.177245  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1018 18:08:44.178500  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (878.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.180552  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.391392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.180787  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1018 18:08:44.181900  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (846.26µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.183967  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.596134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.184160  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1018 18:08:44.185287  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (955.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.187637  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.637729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.187856  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1018 18:08:44.188948  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (907.783µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.199978  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.752103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.200200  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1018 18:08:44.219502  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.359649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.240472  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.240511  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.240545  104271 httplog.go:90] GET /healthz: (1.244599ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:44.241249  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.112494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.241471  104271 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1018 18:08:44.256285  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.256333  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.256429  104271 httplog.go:90] GET /healthz: (1.507977ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.259515  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.574971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.280371  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.317325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.280587  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
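[annotation] At this point the bootstrapper has finished the ClusterRoles and moves on to ClusterRoleBindings, starting with cluster-admin, using the same ensure-exists pattern. For reference, the shape of such a binding in the rbac/v1 Go types; the subject shown is the system:masters group, which the default cluster-admin binding grants to, though this snippet is only an illustration, not the bootstrap policy source:

```go
package main

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterAdminBinding builds a binding like the one created above, tying the
// cluster-admin ClusterRole to the system:masters group.
func clusterAdminBinding() *rbacv1.ClusterRoleBinding {
	return &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "cluster-admin"},
		Subjects: []rbacv1.Subject{{
			Kind:     rbacv1.GroupKind,
			APIGroup: rbacv1.GroupName,
			Name:     "system:masters",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
	}
}
```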
I1018 18:08:44.299303  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.35321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.322569  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.142473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.323017  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1018 18:08:44.340229  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.340258  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.340288  104271 httplog.go:90] GET /healthz: (951.904µs) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:44.340535  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.459139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.356578  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.356613  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.356649  104271 httplog.go:90] GET /healthz: (1.72276ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.360516  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.503356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.360887  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1018 18:08:44.379690  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.568455ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.400464  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.2463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.400875  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1018 18:08:44.419711  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.566999ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.440051  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.828684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.440066  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.440125  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.440163  104271 httplog.go:90] GET /healthz: (1.021764ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:44.440244  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1018 18:08:44.456032  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.456063  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.456113  104271 httplog.go:90] GET /healthz: (1.362725ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.458980  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.152734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.480394  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.394113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.480684  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1018 18:08:44.499293  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.208345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.529666  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (11.318334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.529964  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1018 18:08:44.539942  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.892239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.540950  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.540976  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.541010  104271 httplog.go:90] GET /healthz: (1.308792ms) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:44.559974  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.032688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.559974  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.560073  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.560115  104271 httplog.go:90] GET /healthz: (5.164033ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.560186  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1018 18:08:44.579349  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.410786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.604752  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.432504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.604995  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1018 18:08:44.619119  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.179824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.641324  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.641362  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.641403  104271 httplog.go:90] GET /healthz: (2.197992ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:44.641745  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.887652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.642512  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1018 18:08:44.657540  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.657574  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.657613  104271 httplog.go:90] GET /healthz: (2.836493ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.660068  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.108521ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.680599  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.654628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.681078  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1018 18:08:44.699089  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.112367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.721365  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.431054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.721622  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1018 18:08:44.739555  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.587367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.740672  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.740698  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.740737  104271 httplog.go:90] GET /healthz: (1.393186ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:44.770244  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.770275  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.770333  104271 httplog.go:90] GET /healthz: (15.529884ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.770663  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (12.716004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.770949  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1018 18:08:44.781987  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (3.89806ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.800312  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.977026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.800564  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1018 18:08:44.820711  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.202433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.840529  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.840558  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.840595  104271 httplog.go:90] GET /healthz: (1.298513ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:44.841896  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.89585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.842168  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1018 18:08:44.858198  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.858246  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.858287  104271 httplog.go:90] GET /healthz: (3.475805ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.859479  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.42608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.881723  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.819526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.882516  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1018 18:08:44.899755  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.848286ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.921293  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.964979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:44.921660  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1018 18:08:44.940436  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.940462  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.940504  104271 httplog.go:90] GET /healthz: (1.211212ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:44.942476  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (3.182369ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.956861  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:44.956898  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:44.956933  104271 httplog.go:90] GET /healthz: (2.19046ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.959953  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.064827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:44.960553  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1018 18:08:44.979922  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.994513ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.003238  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.292692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.003520  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1018 18:08:45.019960  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.876896ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.041153  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.041192  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.041231  104271 httplog.go:90] GET /healthz: (1.827044ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:45.041670  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.610254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.041931  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1018 18:08:45.060506  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.060553  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.060588  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.678296ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.060595  104271 httplog.go:90] GET /healthz: (5.833879ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.081662  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.471316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.081996  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1018 18:08:45.099904  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.232746ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.120801  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.602873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.121058  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1018 18:08:45.140209  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.140239  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.140281  104271 httplog.go:90] GET /healthz: (951.03µs) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:45.143425  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (5.431125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.156276  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.156313  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.156380  104271 httplog.go:90] GET /healthz: (1.396696ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.160175  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.054883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.160454  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1018 18:08:45.179119  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.080548ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.200333  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.222841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.200583  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1018 18:08:45.219764  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.786059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.240868  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.240897  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.240937  104271 httplog.go:90] GET /healthz: (1.581603ms) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:45.240991  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.828539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.241262  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1018 18:08:45.256435  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.256470  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.256518  104271 httplog.go:90] GET /healthz: (1.303044ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.259562  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.68429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.296919  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (18.609531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.297209  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1018 18:08:45.314184  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (16.366706ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.320514  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.623972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.320820  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1018 18:08:45.344071  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (6.135301ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.345877  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.345906  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.345946  104271 httplog.go:90] GET /healthz: (6.70973ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:45.355954  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.355994  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.356052  104271 httplog.go:90] GET /healthz: (1.187286ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.362431  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.541859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.363524  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1018 18:08:45.379703  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.676868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.416602  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (16.469444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.416917  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1018 18:08:45.420300  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.123391ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.440267  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.276918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.440577  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1018 18:08:45.441547  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.441575  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.441619  104271 httplog.go:90] GET /healthz: (1.627316ms) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:45.456667  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.456697  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.456736  104271 httplog.go:90] GET /healthz: (1.805212ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.459034  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.038415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.480704  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.589794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.480968  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1018 18:08:45.498942  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.021616ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.527916  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.175755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.528271  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1018 18:08:45.539686  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.532515ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.540629  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.540655  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.540691  104271 httplog.go:90] GET /healthz: (1.516428ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:45.555790  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.555821  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.555890  104271 httplog.go:90] GET /healthz: (1.129766ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.560157  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.155354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.560395  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1018 18:08:45.580156  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (2.23452ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.600249  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.122903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.600487  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1018 18:08:45.619067  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.10874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.641130  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.641164  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.641201  104271 httplog.go:90] GET /healthz: (1.789832ms) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:45.641405  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.386314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.642319  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1018 18:08:45.657053  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.657105  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.657143  104271 httplog.go:90] GET /healthz: (2.330399ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.667515  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.278607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.684598  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.684708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.684878  104271 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1018 18:08:45.699128  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.176741ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.701339  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.809139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.720364  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.389225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.720631  104271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
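[annotation] With the cluster-scoped objects in place, the bootstrapper now seeds namespaced Roles, beginning with extension-apiserver-authentication-reader in kube-system; note the extra GET /api/v1/namespaces/kube-system (200) confirming the namespace exists before each POST. The namespaced counterpart of the earlier ClusterRole sketch, under the same assumptions:

```go
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureRole confirms the namespace exists, then creates the Role only when
// it is missing -- the GET namespace / GET role 404 / POST 201 sequence above.
func ensureRole(ctx context.Context, client kubernetes.Interface, ns string, role *rbacv1.Role) error {
	if _, err := client.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{}); err != nil {
		return err // matches the GET /api/v1/namespaces/kube-system in the log
	}
	_, err := client.RbacV1().Roles(ns).Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = client.RbacV1().Roles(ns).Create(ctx, role, metav1.CreateOptions{})
	return err
}
```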
I1018 18:08:45.741414  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (3.398457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.742147  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.742173  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.742206  104271 httplog.go:90] GET /healthz: (1.019617ms) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:45.746598  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.705537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.755792  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.755828  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.755877  104271 httplog.go:90] GET /healthz: (1.118629ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.759917  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.96706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.760165  104271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1018 18:08:45.779452  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.339415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.782101  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.190425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.800332  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.256201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.800568  104271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1018 18:08:45.819514  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.575558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.821519  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.453127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.841587  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.649321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:45.841959  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.841987  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.842019  104271 httplog.go:90] GET /healthz: (2.840446ms) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:45.842211  104271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1018 18:08:45.855933  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.855963  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.856002  104271 httplog.go:90] GET /healthz: (1.299877ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.859018  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.175899ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.860895  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.50587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.883556  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.576503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.885308  104271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1018 18:08:45.899291  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.322571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.901484  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.491569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.920293  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.209444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.920558  104271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1018 18:08:45.940467  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.068493ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.940686  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.940710  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.940747  104271 httplog.go:90] GET /healthz: (1.53787ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:45.942348  104271 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.455451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.955881  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:45.955907  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:45.955960  104271 httplog.go:90] GET /healthz: (1.204703ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.959920  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.025469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.960411  104271 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1018 18:08:45.980682  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.522846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:45.985407  104271 httplog.go:90] GET /api/v1/namespaces/kube-public: (4.244949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.000387  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.426116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.001315  104271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1018 18:08:46.021259  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.519422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.025337  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.562233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.040700  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.50439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.040968  104271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1018 18:08:46.044065  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:46.044089  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:46.044128  104271 httplog.go:90] GET /healthz: (3.951298ms) 0 [Go-http-client/1.1 127.0.0.1:46386]
I1018 18:08:46.056360  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:46.056391  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:46.056432  104271 httplog.go:90] GET /healthz: (998.511µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.058964  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.042456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.062430  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.97176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.090016  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (11.876082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.090431  104271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1018 18:08:46.099123  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.229805ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.100738  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.148496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.120314  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.362646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.120823  104271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1018 18:08:46.142333  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (4.353565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.142929  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:46.142954  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:46.142989  104271 httplog.go:90] GET /healthz: (1.243238ms) 0 [Go-http-client/1.1 127.0.0.1:46502]
I1018 18:08:46.143844  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.083286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.155499  104271 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 18:08:46.155526  104271 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 18:08:46.155567  104271 httplog.go:90] GET /healthz: (824.753µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.161460  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.110213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.162034  104271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1018 18:08:46.179729  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.757772ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.183537  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.150925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.199516  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.549518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.199752  104271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1018 18:08:46.219562  104271 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.592201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.221399  104271 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.351713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.240383  104271 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.223258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.240941  104271 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1018 18:08:46.241409  104271 httplog.go:90] GET /healthz: (904.982µs) 200 [Go-http-client/1.1 127.0.0.1:46502]
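This is the first /healthz probe to return 200: every poststarthook, including rbac/bootstrap-roles, now reports ok, and the test proceeds to create its ConfigMaps. A minimal sketch of the readiness poll the probes above imply (function name, interval, and timeout are assumptions, not the actual harness code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls /healthz until the apiserver answers 200 or the deadline
// passes. Earlier probes in this log kept failing with
// "[-]poststarthook/rbac/bootstrap-roles failed" until the bootstrap roles
// and rolebindings were written.
func waitHealthy(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every poststarthook reported ok
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	// The test apiserver in this log listens on 127.0.0.1:43859.
	fmt.Println(waitHealthy("http://127.0.0.1:43859", 30*time.Second))
}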
I1018 18:08:46.244740  104271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.406381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
W1018 18:08:46.245227  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245271  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245286  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245318  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245347  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245357  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245371  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245384  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245397  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245435  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 18:08:46.245454  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 18:08:46.246697  104271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.043356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.247159  104271 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1018 18:08:46.247215  104271 factory.go:308] Registering predicate: PredicateOne
I1018 18:08:46.247233  104271 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1018 18:08:46.247241  104271 factory.go:308] Registering predicate: PredicateTwo
I1018 18:08:46.247246  104271 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1018 18:08:46.247252  104271 factory.go:323] Registering priority: PriorityOne
I1018 18:08:46.247259  104271 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1018 18:08:46.247271  104271 factory.go:323] Registering priority: PriorityTwo
I1018 18:08:46.247277  104271 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1018 18:08:46.247288  104271 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
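The factory.go lines above correspond to the scheduler-custom-policy-config-0 ConfigMap fetched just before: a v1 Policy naming two fit predicates and two weighted priorities. A sketch of that payload follows (the "policy.cfg" data key is an assumption about how the test stores it):

package main

import "fmt"

// policy is a sketch of the scheduler Policy that would produce the
// factory.go output above: predicates PredicateOne/PredicateTwo and
// priorities PriorityOne (weight 1) / PriorityTwo (weight 5).
const policy = `{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PredicateOne"},
    {"name": "PredicateTwo"}
  ],
  "priorities": [
    {"name": "PriorityOne", "weight": 1},
    {"name": "PriorityTwo", "weight": 5}
  ]
}`

func main() {
	// Stored in kube-system/scheduler-custom-policy-config-0 under a
	// "policy.cfg" key (key name assumed).
	fmt.Println(policy)
}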
I1018 18:08:46.256408  104271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (8.554263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
W1018 18:08:46.256727  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 18:08:46.257156  104271 httplog.go:90] GET /healthz: (2.437978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.258298  104271 httplog.go:90] GET /api/v1/namespaces/default: (775.443µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.259677  104271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (2.590826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.260076  104271 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1018 18:08:46.260101  104271 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1018 18:08:46.260113  104271 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1018 18:08:46.260120  104271 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1018 18:08:46.260283  104271 httplog.go:90] POST /api/v1/namespaces: (1.620102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.263252  104271 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.622779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.263696  104271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.587393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
W1018 18:08:46.263946  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 18:08:46.265171  104271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (958.54µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.265474  104271 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1018 18:08:46.265506  104271 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I1018 18:08:46.267536  104271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.404079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
W1018 18:08:46.267795  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 18:08:46.268303  104271 httplog.go:90] POST /api/v1/namespaces/default/services: (4.6465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.269825  104271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (1.740665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
I1018 18:08:46.270426  104271 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1018 18:08:46.270458  104271 factory.go:308] Registering predicate: PredicateOne
I1018 18:08:46.270468  104271 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1018 18:08:46.270476  104271 factory.go:308] Registering predicate: PredicateTwo
I1018 18:08:46.270482  104271 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1018 18:08:46.270490  104271 factory.go:323] Registering priority: PriorityOne
I1018 18:08:46.270498  104271 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1018 18:08:46.270509  104271 factory.go:323] Registering priority: PriorityTwo
I1018 18:08:46.270515  104271 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1018 18:08:46.270523  104271 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I1018 18:08:46.272948  104271 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.680624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.272957  104271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.978884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
W1018 18:08:46.273354  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 18:08:46.274298  104271 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (554.453µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46502]
E1018 18:08:46.274568  104271 controller.go:227] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I1018 18:08:46.274808  104271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (1.215296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.275451  104271 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1018 18:08:46.275643  104271 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1018 18:08:46.275748  104271 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1018 18:08:46.275846  104271 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1018 18:08:46.442374  104271 request.go:538] Throttling request took 165.65421ms, request: POST:http://127.0.0.1:43859/api/v1/namespaces/kube-system/configmaps
I1018 18:08:46.444490  104271 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.803326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
W1018 18:08:46.444801  104271 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 18:08:46.642404  104271 request.go:538] Throttling request took 197.365087ms, request: GET:http://127.0.0.1:43859/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I1018 18:08:46.644014  104271 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.396268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.644441  104271 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1018 18:08:46.644467  104271 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I1018 18:08:46.842341  104271 request.go:538] Throttling request took 197.548536ms, request: DELETE:http://127.0.0.1:43859/api/v1/nodes
I1018 18:08:46.844254  104271 httplog.go:90] DELETE /api/v1/nodes: (1.638438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
I1018 18:08:46.845083  104271 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1018 18:08:46.850999  104271 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (5.620656ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46386]
--- FAIL: TestSchedulerCreationFromConfigMap (4.10s)
    scheduler_test.go:312: Expected predicates map[CheckNodeUnschedulable:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}], got map[CheckNodeUnschedulable:{} MaxAzureDiskVolumeCount:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]
    scheduler_test.go:312: Expected predicates map[CheckNodeUnschedulable:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}], got map[CheckNodeUnschedulable:{} MaxAzureDiskVolumeCount:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191018-175928.xml
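Both assertion failures are the same one-key diff: the scheduler built from the policy is missing MatchInterPodAffinity, the only predicate present in the expected map but absent from the actual one. A minimal sketch of that set comparison (assumed shape, not the actual scheduler_test.go code at line 312):

package main

import "fmt"

func main() {
	expected := map[string]struct{}{
		"CheckNodeUnschedulable":  {},
		"MatchInterPodAffinity":   {},
		"MaxAzureDiskVolumeCount": {},
		"MaxEBSVolumeCount":       {},
		"MaxGCEPDVolumeCount":     {},
	}
	got := map[string]struct{}{
		"CheckNodeUnschedulable":  {},
		"MaxAzureDiskVolumeCount": {},
		"MaxEBSVolumeCount":       {},
		"MaxGCEPDVolumeCount":     {},
	}
	// Report every expected predicate the built scheduler lacks.
	for name := range expected {
		if _, ok := got[name]; !ok {
			fmt.Println("missing predicate:", name) // prints: MatchInterPodAffinity
		}
	}
}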

Error lines from build-log.txt

... skipping 604 lines ...
W1018 17:54:19.575] I1018 17:54:19.497372   53086 controllermanager.go:534] Started "garbagecollector"
W1018 17:54:19.576] W1018 17:54:19.497401   53086 controllermanager.go:513] "bootstrapsigner" is disabled
W1018 17:54:19.576] I1018 17:54:19.497407   53086 garbagecollector.go:130] Starting garbage collector controller
W1018 17:54:19.577] I1018 17:54:19.497454   53086 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1018 17:54:19.577] I1018 17:54:19.497493   53086 graph_builder.go:282] GraphBuilder running
W1018 17:54:19.577] I1018 17:54:19.497815   53086 node_lifecycle_controller.go:77] Sending events to api server
W1018 17:54:19.578] E1018 17:54:19.497867   53086 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W1018 17:54:19.578] W1018 17:54:19.497880   53086 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W1018 17:54:19.578] I1018 17:54:19.498342   53086 controllermanager.go:534] Started "pvc-protection"
W1018 17:54:19.579] I1018 17:54:19.498520   53086 pvc_protection_controller.go:100] Starting PVC protection controller
W1018 17:54:19.579] I1018 17:54:19.498547   53086 shared_informer.go:197] Waiting for caches to sync for PVC protection
W1018 17:54:19.579] I1018 17:54:19.498872   53086 controllermanager.go:534] Started "deployment"
W1018 17:54:19.579] W1018 17:54:19.498900   53086 controllermanager.go:513] "endpointslice" is disabled
W1018 17:54:19.580] I1018 17:54:19.499015   53086 deployment_controller.go:152] Starting deployment controller
W1018 17:54:19.580] I1018 17:54:19.499041   53086 shared_informer.go:197] Waiting for caches to sync for deployment
W1018 17:54:19.580] I1018 17:54:19.499261   53086 controllermanager.go:534] Started "podgc"
W1018 17:54:19.581] I1018 17:54:19.499359   53086 gc_controller.go:75] Starting GC controller
W1018 17:54:19.581] I1018 17:54:19.499388   53086 shared_informer.go:197] Waiting for caches to sync for GC
W1018 17:54:19.581] I1018 17:54:19.499666   53086 controllermanager.go:534] Started "cronjob"
W1018 17:54:19.582] I1018 17:54:19.499955   53086 cronjob_controller.go:96] Starting CronJob Manager
W1018 17:54:19.582] E1018 17:54:19.500172   53086 core.go:79] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1018 17:54:19.582] W1018 17:54:19.500188   53086 controllermanager.go:526] Skipping "service"
W1018 17:54:19.583] I1018 17:54:19.500200   53086 core.go:211] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W1018 17:54:19.583] W1018 17:54:19.500207   53086 controllermanager.go:526] Skipping "route"
W1018 17:54:19.583] W1018 17:54:19.500216   53086 controllermanager.go:526] Skipping "root-ca-cert-publisher"
W1018 17:54:19.584] I1018 17:54:19.500627   53086 controllermanager.go:534] Started "endpoint"
W1018 17:54:19.584] I1018 17:54:19.500809   53086 endpoints_controller.go:175] Starting endpoint controller
... skipping 37 lines ...
W1018 17:54:19.592] I1018 17:54:19.523642   53086 ttl_controller.go:116] Starting TTL controller
W1018 17:54:19.592] I1018 17:54:19.523674   53086 shared_informer.go:197] Waiting for caches to sync for TTL
W1018 17:54:19.592] I1018 17:54:19.523715   53086 expand_controller.go:308] Starting expand controller
W1018 17:54:19.592] I1018 17:54:19.523731   53086 shared_informer.go:197] Waiting for caches to sync for expand
W1018 17:54:19.592] I1018 17:54:19.523782   53086 pv_protection_controller.go:81] Starting PV protection controller
W1018 17:54:19.593] I1018 17:54:19.523796   53086 shared_informer.go:197] Waiting for caches to sync for PV protection
W1018 17:54:19.593] W1018 17:54:19.577915   53086 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W1018 17:54:19.599] I1018 17:54:19.599001   53086 shared_informer.go:204] Caches are synced for PVC protection 
W1018 17:54:19.600] I1018 17:54:19.599738   53086 shared_informer.go:204] Caches are synced for GC 
W1018 17:54:19.601] I1018 17:54:19.601201   53086 shared_informer.go:204] Caches are synced for endpoint 
W1018 17:54:19.602] I1018 17:54:19.602282   53086 shared_informer.go:204] Caches are synced for job 
W1018 17:54:19.602] I1018 17:54:19.602450   53086 shared_informer.go:204] Caches are synced for stateful set 
W1018 17:54:19.603] I1018 17:54:19.603185   53086 shared_informer.go:204] Caches are synced for ReplicaSet 
W1018 17:54:19.603] I1018 17:54:19.603409   53086 shared_informer.go:204] Caches are synced for certificate-csrapproving 
W1018 17:54:19.604] I1018 17:54:19.603946   53086 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W1018 17:54:19.615] E1018 17:54:19.614343   53086 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1018 17:54:19.617] I1018 17:54:19.616927   53086 shared_informer.go:204] Caches are synced for daemon sets 
W1018 17:54:19.624] I1018 17:54:19.623932   53086 shared_informer.go:204] Caches are synced for TTL 
W1018 17:54:19.626] E1018 17:54:19.626372   53086 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W1018 17:54:19.672] I1018 17:54:19.672434   53086 shared_informer.go:204] Caches are synced for ReplicationController 
W1018 17:54:19.685] I1018 17:54:19.684459   53086 shared_informer.go:204] Caches are synced for taint 
W1018 17:54:19.685] I1018 17:54:19.684573   53086 node_lifecycle_controller.go:1282] Initializing eviction metric for zone: 
W1018 17:54:19.685] I1018 17:54:19.684901   53086 event.go:262] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"ebe43893-4e2e-4df7-a695-f6d52b27f3f2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W1018 17:54:19.686] I1018 17:54:19.684975   53086 node_lifecycle_controller.go:1132] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W1018 17:54:19.686] I1018 17:54:19.684616   53086 taint_manager.go:186] Starting NoExecuteTaintManager
... skipping 84 lines ...
I1018 17:54:23.471] +++ working dir: /go/src/k8s.io/kubernetes
I1018 17:54:23.474] +++ command: run_RESTMapper_evaluation_tests
I1018 17:54:23.488] +++ [1018 17:54:23] Creating namespace namespace-1571421263-1795
I1018 17:54:23.573] namespace/namespace-1571421263-1795 created
I1018 17:54:23.645] Context "test" modified.
I1018 17:54:23.652] +++ [1018 17:54:23] Testing RESTMapper
I1018 17:54:23.764] +++ [1018 17:54:23] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I1018 17:54:23.780] +++ exit code: 0
I1018 17:54:23.901] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I1018 17:54:23.902] bindings                                                                      true         Binding
I1018 17:54:23.902] componentstatuses                 cs                                          false        ComponentStatus
I1018 17:54:23.902] configmaps                        cm                                          true         ConfigMap
I1018 17:54:23.903] endpoints                         ep                                          true         Endpoints
... skipping 317 lines ...
I1018 17:54:37.883] core.sh:79: Successful get pods/valid-pod {{.metadata.name}}: valid-pod
I1018 17:54:37.977] core.sh:81: Successful get pods {.items[*].metadata.name}: valid-pod
I1018 17:54:38.076] core.sh:82: Successful get pod valid-pod {.metadata.name}: valid-pod
I1018 17:54:38.166] core.sh:83: Successful get pod/valid-pod {.metadata.name}: valid-pod
I1018 17:54:38.250] core.sh:84: Successful get pods/valid-pod {.metadata.name}: valid-pod
I1018 17:54:38.346]
I1018 17:54:38.350] core.sh:86: FAIL!
I1018 17:54:38.351] Describe pods valid-pod
I1018 17:54:38.351]   Expected Match: Name:
I1018 17:54:38.351]   Not found in:
I1018 17:54:38.351] Name:         valid-pod
I1018 17:54:38.351] Namespace:    namespace-1571421276-6391
I1018 17:54:38.352] Priority:     0
... skipping 108 lines ...
I1018 17:54:38.673] QoS Class:        Guaranteed
I1018 17:54:38.674] Node-Selectors:   <none>
I1018 17:54:38.674] Tolerations:      <none>
I1018 17:54:38.674] Events:           <none>
I1018 17:54:38.674]
I1018 17:54:38.792] 
I1018 17:54:38.793] FAIL!
I1018 17:54:38.793] Describe pods
I1018 17:54:38.793]   Expected Match: Name:
I1018 17:54:38.794]   Not found in:
I1018 17:54:38.794] Name:         valid-pod
I1018 17:54:38.794] Namespace:    namespace-1571421276-6391
I1018 17:54:38.794] Priority:     0
... skipping 174 lines ...
I1018 17:54:44.962] core.sh:235: Successful get configmap/test-configmap --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-configmap
I1018 17:54:45.045] poddisruptionbudget.policy/test-pdb-1 created
I1018 17:54:45.141] core.sh:241: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2
I1018 17:54:45.214] poddisruptionbudget.policy/test-pdb-2 created
I1018 17:54:45.308] core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
I1018 17:54:45.385] poddisruptionbudget.policy/test-pdb-3 created
W1018 17:54:45.486] error: resource(s) were provided, but no name, label selector, or --all flag specified
W1018 17:54:45.486] error: setting 'all' parameter but found a non empty selector. 
W1018 17:54:45.486] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1018 17:54:45.487] I1018 17:54:45.043261   49536 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
I1018 17:54:45.587] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I1018 17:54:45.587] poddisruptionbudget.policy/test-pdb-4 created
I1018 17:54:45.669] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I1018 17:54:45.848] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:54:46.042] pod/env-test-pod created
W1018 17:54:46.143] error: min-available and max-unavailable cannot be both specified
I1018 17:54:46.244] 
I1018 17:54:46.244] core.sh:264: FAIL!
I1018 17:54:46.245] Describe pods --namespace=test-kubectl-describe-pod env-test-pod
I1018 17:54:46.245]   Expected Match: TEST_CMD_1
I1018 17:54:46.246]   Not found in:
I1018 17:54:46.246] Name:         env-test-pod
I1018 17:54:46.246] Namespace:    test-kubectl-describe-pod
I1018 17:54:46.247] Priority:     0
... skipping 23 lines ...
I1018 17:54:46.253] Tolerations:       <none>
I1018 17:54:46.254] Events:            <none>
I1018 17:54:46.254]
I1018 17:54:46.254] 264 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1018 17:54:46.254]
I1018 17:54:46.263] 
I1018 17:54:46.264] FAIL!
I1018 17:54:46.265] Describe pods --namespace=test-kubectl-describe-pod
I1018 17:54:46.265]   Expected Match: TEST_CMD_1
I1018 17:54:46.265]   Not found in:
I1018 17:54:46.266] Name:         env-test-pod
I1018 17:54:46.266] Namespace:    test-kubectl-describe-pod
I1018 17:54:46.266] Priority:     0
... skipping 35 lines ...
I1018 17:54:46.710] namespace "test-kubectl-describe-pod" deleted
I1018 17:54:51.838] +++ [1018 17:54:51] Creating namespace namespace-1571421291-7571
I1018 17:54:51.938] namespace/namespace-1571421291-7571 created
I1018 17:54:52.031] Context "test" modified.
I1018 17:54:52.136] core.sh:278: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:54:52.347] pod/valid-pod created
W1018 17:54:52.534] error: the path "test/e2e/testing-manifests/kubectl/redis-master-pod.yaml" does not exist
I1018 17:54:52.644] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: redis-master:valid-pod:, got: valid-pod:
I1018 17:54:52.646] 
I1018 17:54:52.651] core.sh:283: FAIL!
I1018 17:54:52.651] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I1018 17:54:52.651]   Expected: redis-master:valid-pod:
I1018 17:54:52.652]   Got:      valid-pod:
I1018 17:54:52.652]
I1018 17:54:52.652] 53 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I1018 17:54:52.652]
I1018 17:54:52.747] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: redis-master:valid-pod:, got: valid-pod:
I1018 17:54:52.749] 
I1018 17:54:52.754] core.sh:287: FAIL!
I1018 17:54:52.754] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I1018 17:54:52.754]   Expected: redis-master:valid-pod:
I1018 17:54:52.755]   Got:      valid-pod:
I1018 17:54:52.755]
I1018 17:54:52.755] 53 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I1018 17:54:52.755]
... skipping 5 lines ...
I1018 17:54:53.215] core.sh:296: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:54:53.401] pod/valid-pod created
I1018 17:54:53.500] core.sh:300: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1018 17:54:53.591] core.sh:304: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:
I1018 17:54:53.669] pod/valid-pod labeled
W1018 17:54:53.771] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1018 17:54:53.771] Error from server (NotFound): pods "redis-master" not found
I1018 17:54:53.872] core.sh:308: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:new-valid-pod:
I1018 17:54:53.884] core.sh:312: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:new-valid-pod:
I1018 17:54:53.968] pod/valid-pod labeled
I1018 17:54:54.069] core.sh:316: Successful get pod valid-pod {{.metadata.labels.emptylabel}}: 
I1018 17:54:54.160] core.sh:320: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}: <no value>
I1018 17:54:54.264] pod/valid-pod annotated
... skipping 88 lines ...
I1018 17:55:00.120] pod/valid-pod patched
I1018 17:55:00.216] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I1018 17:55:00.300] pod/valid-pod patched
I1018 17:55:00.413] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I1018 17:55:00.608] pod/valid-pod patched
I1018 17:55:00.714] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1018 17:55:00.938] +++ [1018 17:55:00] "kubectl patch with resourceVersion 497" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I1018 17:55:01.237] pod "valid-pod" deleted
I1018 17:55:01.254] pod/valid-pod replaced
I1018 17:55:01.368] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I1018 17:55:01.559] Successful
I1018 17:55:01.560] message:error: --grace-period must have --force specified
I1018 17:55:01.560] has:\-\-grace-period must have \-\-force specified
I1018 17:55:01.744] Successful
I1018 17:55:01.744] message:error: --timeout must have --force specified
I1018 17:55:01.744] has:\-\-timeout must have \-\-force specified
I1018 17:55:01.937] node/node-v1-test created
W1018 17:55:02.038] W1018 17:55:01.937637   53086 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I1018 17:55:02.139] node/node-v1-test replaced
I1018 17:55:02.239] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I1018 17:55:02.321] node "node-v1-test" deleted
I1018 17:55:02.429] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I1018 17:55:02.715] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I1018 17:55:03.770] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 21 lines ...
I1018 17:55:03.953]     name: kubernetes-pause
I1018 17:55:03.954] has:localonlyvalue
W1018 17:55:04.054] Edit cancelled, no changes made.
W1018 17:55:04.055] Edit cancelled, no changes made.
W1018 17:55:04.055] Edit cancelled, no changes made.
W1018 17:55:04.055] Edit cancelled, no changes made.
W1018 17:55:04.154] error: 'name' already has a value (valid-pod), and --overwrite is false
I1018 17:55:04.256] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1018 17:55:04.262] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1018 17:55:04.354] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I1018 17:55:04.436] pod/valid-pod labeled
I1018 17:55:04.537] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I1018 17:55:04.630] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
... skipping 86 lines ...
I1018 17:55:12.031] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I1018 17:55:12.034] +++ working dir: /go/src/k8s.io/kubernetes
I1018 17:55:12.037] +++ command: run_kubectl_create_error_tests
I1018 17:55:12.049] +++ [1018 17:55:12] Creating namespace namespace-1571421312-3150
I1018 17:55:12.146] namespace/namespace-1571421312-3150 created
I1018 17:55:12.250] Context "test" modified.
I1018 17:55:12.259] +++ [1018 17:55:12] Testing kubectl create with error
W1018 17:55:12.359] Error: must specify one of -f and -k
W1018 17:55:12.360] 
W1018 17:55:12.360] Create a resource from a file or from stdin.
W1018 17:55:12.360] 
W1018 17:55:12.360]  JSON and YAML formats are accepted.
W1018 17:55:12.360] 
W1018 17:55:12.360] Examples:
... skipping 41 lines ...
W1018 17:55:12.365] 
W1018 17:55:12.365] Usage:
W1018 17:55:12.366]   kubectl create -f FILENAME [options]
W1018 17:55:12.366] 
W1018 17:55:12.366] Use "kubectl <command> --help" for more information about a given command.
W1018 17:55:12.366] Use "kubectl options" for a list of global command-line options (applies to all commands).
I1018 17:55:12.517] +++ [1018 17:55:12] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W1018 17:55:12.618] kubectl convert is DEPRECATED and will be removed in a future version.
W1018 17:55:12.619] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1018 17:55:12.719] +++ exit code: 0
I1018 17:55:12.754] Recording: run_kubectl_apply_tests
I1018 17:55:12.755] Running command: run_kubectl_apply_tests
I1018 17:55:12.778] 
... skipping 16 lines ...
I1018 17:55:14.461] apply.sh:289: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I1018 17:55:14.544] pod "test-pod" deleted
I1018 17:55:14.773] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W1018 17:55:15.061] I1018 17:55:15.060627   49536 client.go:357] parsed scheme: "endpoint"
W1018 17:55:15.061] I1018 17:55:15.060671   49536 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1018 17:55:15.066] I1018 17:55:15.065924   49536 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W1018 17:55:15.163] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I1018 17:55:15.263] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I1018 17:55:15.264] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1018 17:55:15.279] +++ exit code: 0
I1018 17:55:15.346] Recording: run_kubectl_run_tests
I1018 17:55:15.347] Running command: run_kubectl_run_tests
I1018 17:55:15.371] 
... skipping 8 lines ...
I1018 17:55:15.782] job.batch/pi created
W1018 17:55:15.883] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1018 17:55:15.884] I1018 17:55:15.769035   49536 controller.go:606] quota admission added evaluator for: jobs.batch
W1018 17:55:15.884] I1018 17:55:15.785795   53086 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571421315-19478", Name:"pi", UID:"37050c7b-0a3c-4240-984f-790e989ece53", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-ltt9v
I1018 17:55:15.985] run.sh:33: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: pi:
I1018 17:55:16.006]
I1018 17:55:16.007] FAIL!
I1018 17:55:16.007] Describe pods
I1018 17:55:16.008]   Expected Match: Name:
I1018 17:55:16.008]   Not found in:
I1018 17:55:16.008] Name:           pi-ltt9v
I1018 17:55:16.008] Namespace:      namespace-1571421315-19478
I1018 17:55:16.009] Priority:       0
... skipping 83 lines ...
I1018 17:55:18.679] Context "test" modified.
I1018 17:55:18.695] +++ [1018 17:55:18] Testing kubectl create filter
I1018 17:55:18.877] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:55:19.269] pod/selector-test-pod created
I1018 17:55:19.462] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I1018 17:55:19.624] Successful
I1018 17:55:19.625] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I1018 17:55:19.626] has:pods "selector-test-pod-dont-apply" not found
I1018 17:55:19.777] pod "selector-test-pod" deleted
I1018 17:55:19.813] +++ exit code: 0
I1018 17:55:19.866] Recording: run_kubectl_apply_deployments_tests
I1018 17:55:19.867] Running command: run_kubectl_apply_deployments_tests
I1018 17:55:19.908] 
... skipping 29 lines ...
W1018 17:55:24.215] I1018 17:55:24.121692   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421319-24237", Name:"nginx", UID:"1e854928-accb-4ba6-80bd-66220710a418", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W1018 17:55:24.216] I1018 17:55:24.129443   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421319-24237", Name:"nginx-8484dd655", UID:"cb26c7df-9059-4830-857d-ecfb920e8145", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-fbv56
W1018 17:55:24.217] I1018 17:55:24.137461   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421319-24237", Name:"nginx-8484dd655", UID:"cb26c7df-9059-4830-857d-ecfb920e8145", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-gkprc
W1018 17:55:24.218] I1018 17:55:24.140645   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421319-24237", Name:"nginx-8484dd655", UID:"cb26c7df-9059-4830-857d-ecfb920e8145", APIVersion:"apps/v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-dv2r8
I1018 17:55:24.348] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I1018 17:55:28.692] Successful
I1018 17:55:28.692] message:Error from server (Conflict): error when applying patch:
I1018 17:55:28.693] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1571421319-24237\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I1018 17:55:28.693] to:
I1018 17:55:28.694] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I1018 17:55:28.694] Name: "nginx", Namespace: "namespace-1571421319-24237"
I1018 17:55:28.696] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1571421319-24237\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-10-18T17:55:24Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1571421319-24237" "resourceVersion":"595" "selfLink":"/apis/apps/v1/namespaces/namespace-1571421319-24237/deployments/nginx" "uid":"1e854928-accb-4ba6-80bd-66220710a418"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-10-18T17:55:24Z" "lastUpdateTime":"2019-10-18T17:55:24Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-10-18T17:55:24Z" "lastUpdateTime":"2019-10-18T17:55:24Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I1018 17:55:28.696] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I1018 17:55:28.697] has:Error from server (Conflict)
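The Conflict above is ordinary optimistic concurrency at work: the manifest's last-applied configuration pins "resourceVersion":"99", while the live Deployment has advanced to resourceVersion 595, so the server refuses the write until the client re-reads. A minimal sketch of that check (assumed shape, not actual apiserver code):

package main

import "fmt"

// applyUpdate rejects a write whose resourceVersion no longer matches the
// live object, mirroring what the apiserver did above (patch pinned to
// "99", live object at "595").
func applyUpdate(liveRV, requestRV string) error {
	if requestRV != "" && requestRV != liveRV {
		return fmt.Errorf("conflict: object modified (live %s, request %s); re-read and retry", liveRV, requestRV)
	}
	return nil
}

func main() {
	fmt.Println(applyUpdate("595", "99")) // stale resourceVersion -> conflict, as in the log
}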
W1018 17:55:28.797] I1018 17:55:26.361854   53086 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1571421309-28576
W1018 17:55:32.989] E1018 17:55:32.988908   53086 replica_set.go:488] Sync "namespace-1571421319-24237/nginx-8484dd655" failed with replicasets.apps "nginx-8484dd655" not found
I1018 17:55:33.969] deployment.apps/nginx configured
I1018 17:55:34.066] Successful
I1018 17:55:34.066] message:        "name": "nginx2"
I1018 17:55:34.067]           "name": "nginx2"
I1018 17:55:34.067] has:"name": "nginx2"
W1018 17:55:34.168] I1018 17:55:33.973560   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421319-24237", Name:"nginx", UID:"bfb124fb-a077-43e2-b1cb-aa520a30a6c3", APIVersion:"apps/v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
... skipping 141 lines ...
I1018 17:55:41.493] +++ [1018 17:55:41] Creating namespace namespace-1571421341-13705
I1018 17:55:41.592] namespace/namespace-1571421341-13705 created
I1018 17:55:41.678] Context "test" modified.
I1018 17:55:41.685] +++ [1018 17:55:41] Testing kubectl get
I1018 17:55:41.791] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:55:41.887] Successful
I1018 17:55:41.888] message:Error from server (NotFound): pods "abc" not found
I1018 17:55:41.888] has:pods "abc" not found
I1018 17:55:42.003] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:55:42.094] Successful
I1018 17:55:42.094] message:Error from server (NotFound): pods "abc" not found
I1018 17:55:42.094] has:pods "abc" not found
I1018 17:55:42.196] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:55:42.297] Successful
I1018 17:55:42.297] message:{
I1018 17:55:42.298]     "apiVersion": "v1",
I1018 17:55:42.298]     "items": [],
... skipping 23 lines ...
I1018 17:55:42.678] has not:No resources found
I1018 17:55:42.765] Successful
I1018 17:55:42.765] message:NAME
I1018 17:55:42.765] has not:No resources found
I1018 17:55:42.858] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:55:42.964] Successful
I1018 17:55:42.965] message:error: the server doesn't have a resource type "foobar"
I1018 17:55:42.965] has not:No resources found
I1018 17:55:43.055] Successful
I1018 17:55:43.055] message:No resources found in namespace-1571421341-13705 namespace.
I1018 17:55:43.056] has:No resources found
I1018 17:55:43.155] Successful
I1018 17:55:43.156] message:
I1018 17:55:43.156] has not:No resources found
I1018 17:55:43.253] Successful
I1018 17:55:43.254] message:No resources found in namespace-1571421341-13705 namespace.
I1018 17:55:43.254] has:No resources found
I1018 17:55:43.348] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:55:43.437] Successful
I1018 17:55:43.438] message:Error from server (NotFound): pods "abc" not found
I1018 17:55:43.438] has:pods "abc" not found
I1018 17:55:43.440] FAIL!
I1018 17:55:43.440] message:Error from server (NotFound): pods "abc" not found
I1018 17:55:43.440] has not:List
I1018 17:55:43.440] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I1018 17:55:43.566] Successful
I1018 17:55:43.567] message:I1018 17:55:43.517708   62864 loader.go:375] Config loaded from file:  /tmp/tmp.pL34uig2M7/.kube/config
I1018 17:55:43.567] I1018 17:55:43.519273   62864 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
I1018 17:55:43.567] I1018 17:55:43.543508   62864 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I1018 17:55:49.152] Successful
I1018 17:55:49.153] message:NAME    DATA   AGE
I1018 17:55:49.153] one     0      1s
I1018 17:55:49.153] three   0      0s
I1018 17:55:49.153] two     0      1s
I1018 17:55:49.153] STATUS    REASON          MESSAGE
I1018 17:55:49.153] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1018 17:55:49.153] has not:watch is only supported on individual resources
I1018 17:55:50.241] Successful
I1018 17:55:50.241] message:STATUS    REASON          MESSAGE
I1018 17:55:50.241] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1018 17:55:50.242] has not:watch is only supported on individual resources
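Both watch checks above cut the stream off on purpose: the InternalError about the cancelled request body is the expected outcome of watching a list with a short client-side timeout, and the assertion only verifies that the old "watch is only supported on individual resources" error is gone. A hedged approximation of the invocation (the exact flags in get.sh may differ):

kubectl get configmaps --watch --request-timeout=1s
# the table header and the one/two/three rows print, then the client
# timeout tears the watch down with the decode failure quoted above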
I1018 17:55:50.247] +++ [1018 17:55:50] Creating namespace namespace-1571421350-27629
I1018 17:55:50.317] namespace/namespace-1571421350-27629 created
I1018 17:55:50.412] Context "test" modified.
I1018 17:55:50.535] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:55:50.693] pod/valid-pod created
... skipping 56 lines ...
I1018 17:55:50.802] }
I1018 17:55:50.913] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1018 17:55:51.197] <no value>Successful
I1018 17:55:51.197] message:valid-pod:
I1018 17:55:51.198] has:valid-pod:
I1018 17:55:51.291] Successful
I1018 17:55:51.291] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I1018 17:55:51.292] 	template was:
I1018 17:55:51.292] 		{.missing}
I1018 17:55:51.292] 	object given to jsonpath engine was:
I1018 17:55:51.293] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-10-18T17:55:50Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1571421350-27629", "resourceVersion":"698", "selfLink":"/api/v1/namespaces/namespace-1571421350-27629/pods/valid-pod", "uid":"8fc5840c-ffb5-4434-96f9-c12b5635b71e"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I1018 17:55:51.294] has:missing is not found
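That jsonpath failure is directly reproducible: {.missing} names a key the Pod object does not carry, so kubectl prints the template, the object it evaluated, and the "missing is not found" error, exactly as captured above:

kubectl get pod valid-pod -o jsonpath='{.missing}'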
I1018 17:55:51.386] Successful
I1018 17:55:51.386] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I1018 17:55:51.386] 	template was:
I1018 17:55:51.386] 		{{.missing}}
I1018 17:55:51.387] 	raw data was:
I1018 17:55:51.387] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-10-18T17:55:50Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1571421350-27629","resourceVersion":"698","selfLink":"/api/v1/namespaces/namespace-1571421350-27629/pods/valid-pod","uid":"8fc5840c-ffb5-4434-96f9-c12b5635b71e"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I1018 17:55:51.388] 	object given to template engine was:
I1018 17:55:51.388] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-10-18T17:55:50Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1571421350-27629 resourceVersion:698 selfLink:/api/v1/namespaces/namespace-1571421350-27629/pods/valid-pod uid:8fc5840c-ffb5-4434-96f9-c12b5635b71e] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I1018 17:55:51.388] has:map has no entry for key "missing"
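The Go-template variant of the same probe fails one layer deeper, in text/template itself, which is why the wording changes to "map has no entry for key":

kubectl get pod valid-pod -o go-template='{{.missing}}'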
W1018 17:55:51.489] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I1018 17:55:52.481] Successful
I1018 17:55:52.482] message:NAME        READY   STATUS    RESTARTS   AGE
I1018 17:55:52.482] valid-pod   0/1     Pending   0          1s
I1018 17:55:52.482] STATUS      REASON          MESSAGE
I1018 17:55:52.482] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1018 17:55:52.483] has:STATUS
I1018 17:55:52.484] Successful
I1018 17:55:52.484] message:NAME        READY   STATUS    RESTARTS   AGE
I1018 17:55:52.484] valid-pod   0/1     Pending   0          1s
I1018 17:55:52.484] STATUS      REASON          MESSAGE
I1018 17:55:52.484] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1018 17:55:52.485] has:valid-pod
I1018 17:55:53.568] Successful
I1018 17:55:53.569] message:pod/valid-pod
I1018 17:55:53.569] has not:STATUS
I1018 17:55:53.571] Successful
I1018 17:55:53.571] message:pod/valid-pod
... skipping 72 lines ...
I1018 17:55:54.676] status:
I1018 17:55:54.676]   phase: Pending
I1018 17:55:54.676]   qosClass: Guaranteed
I1018 17:55:54.676] ---
I1018 17:55:54.676] has:name: valid-pod
I1018 17:55:54.751] Successful
I1018 17:55:54.752] message:Error from server (NotFound): pods "invalid-pod" not found
I1018 17:55:54.752] has:"invalid-pod" not found
I1018 17:55:54.834] pod "valid-pod" deleted
I1018 17:55:54.939] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:55:55.098] pod/redis-master created
I1018 17:55:55.102] pod/valid-pod created
I1018 17:55:55.206] Successful
... skipping 35 lines ...
I1018 17:55:56.381] +++ command: run_kubectl_exec_pod_tests
I1018 17:55:56.393] +++ [1018 17:55:56] Creating namespace namespace-1571421356-10370
I1018 17:55:56.467] namespace/namespace-1571421356-10370 created
I1018 17:55:56.541] Context "test" modified.
I1018 17:55:56.548] +++ [1018 17:55:56] Testing kubectl exec POD COMMAND
I1018 17:55:56.638] Successful
I1018 17:55:56.638] message:Error from server (NotFound): pods "abc" not found
I1018 17:55:56.639] has:pods "abc" not found
I1018 17:55:56.810] pod/test-pod created
I1018 17:55:56.914] Successful
I1018 17:55:56.915] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1018 17:55:56.915] has not:pods "test-pod" not found
I1018 17:55:56.916] Successful
I1018 17:55:56.916] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1018 17:55:56.917] has not:pod or type/name must be specified
I1018 17:55:57.003] pod "test-pod" deleted
I1018 17:55:57.026] +++ exit code: 0
I1018 17:55:57.118] Recording: run_kubectl_exec_resource_name_tests
I1018 17:55:57.118] Running command: run_kubectl_exec_resource_name_tests
I1018 17:55:57.143] 
... skipping 2 lines ...
I1018 17:55:57.151] +++ command: run_kubectl_exec_resource_name_tests
I1018 17:55:57.163] +++ [1018 17:55:57] Creating namespace namespace-1571421357-21196
I1018 17:55:57.241] namespace/namespace-1571421357-21196 created
I1018 17:55:57.318] Context "test" modified.
I1018 17:55:57.325] +++ [1018 17:55:57] Testing kubectl exec TYPE/NAME COMMAND
I1018 17:55:57.423] Successful
I1018 17:55:57.423] message:error: the server doesn't have a resource type "foo"
I1018 17:55:57.424] has:error:
I1018 17:55:57.510] Successful
I1018 17:55:57.511] message:Error from server (NotFound): deployments.apps "bar" not found
I1018 17:55:57.511] has:"bar" not found
I1018 17:55:57.674] pod/test-pod created
I1018 17:55:57.840] replicaset.apps/frontend created
W1018 17:55:57.941] I1018 17:55:57.853102   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421357-21196", Name:"frontend", UID:"ed61eeb0-1955-4484-9305-40dbb9398cec", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rzlz4
W1018 17:55:57.942] I1018 17:55:57.855652   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421357-21196", Name:"frontend", UID:"ed61eeb0-1955-4484-9305-40dbb9398cec", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nw49r
W1018 17:55:57.943] I1018 17:55:57.856167   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421357-21196", Name:"frontend", UID:"ed61eeb0-1955-4484-9305-40dbb9398cec", APIVersion:"apps/v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mwdn9
I1018 17:55:58.043] configmap/test-set-env-config created
I1018 17:55:58.122] Successful
I1018 17:55:58.122] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I1018 17:55:58.122] has:not implemented
I1018 17:55:58.212] Successful
I1018 17:55:58.213] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1018 17:55:58.213] has not:not found
I1018 17:55:58.214] Successful
I1018 17:55:58.214] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I1018 17:55:58.215] has not:pod or type/name must be specified
I1018 17:55:58.317] Successful
I1018 17:55:58.318] message:Error from server (BadRequest): pod frontend-mwdn9 does not have a host assigned
I1018 17:55:58.318] has not:not found
I1018 17:55:58.320] Successful
I1018 17:55:58.320] message:Error from server (BadRequest): pod frontend-mwdn9 does not have a host assigned
I1018 17:55:58.320] has not:pod or type/name must be specified
I1018 17:55:58.399] pod "test-pod" deleted
I1018 17:55:58.479] replicaset.apps "frontend" deleted
I1018 17:55:58.562] configmap "test-set-env-config" deleted
I1018 17:55:58.581] +++ exit code: 0
I1018 17:55:58.618] Recording: run_create_secret_tests
I1018 17:55:58.618] Running command: run_create_secret_tests
I1018 17:55:58.641] 
I1018 17:55:58.644] +++ Running case: test-cmd.run_create_secret_tests 
I1018 17:55:58.646] +++ working dir: /go/src/k8s.io/kubernetes
I1018 17:55:58.649] +++ command: run_create_secret_tests
I1018 17:55:58.746] Successful
I1018 17:55:58.746] message:Error from server (NotFound): secrets "mysecret" not found
I1018 17:55:58.746] has:secrets "mysecret" not found
I1018 17:55:58.902] Successful
I1018 17:55:58.903] message:Error from server (NotFound): secrets "mysecret" not found
I1018 17:55:58.903] has:secrets "mysecret" not found
I1018 17:55:58.904] Successful
I1018 17:55:58.904] message:user-specified
I1018 17:55:58.905] has:user-specified
I1018 17:55:58.976] Successful
I1018 17:55:59.052] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"fe05ff2a-7af8-4f40-b9a4-50e592f75dc3","resourceVersion":"773","creationTimestamp":"2019-10-18T17:55:59Z"}}
... skipping 2 lines ...
I1018 17:55:59.229] has:uid
I1018 17:55:59.303] Successful
I1018 17:55:59.304] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"fe05ff2a-7af8-4f40-b9a4-50e592f75dc3","resourceVersion":"774","creationTimestamp":"2019-10-18T17:55:59Z"},"data":{"key1":"config1"}}
I1018 17:55:59.304] has:config1
I1018 17:55:59.371] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"fe05ff2a-7af8-4f40-b9a4-50e592f75dc3"}}
I1018 17:55:59.456] Successful
I1018 17:55:59.457] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I1018 17:55:59.457] has:configmaps "tester-update-cm" not found
I1018 17:55:59.471] +++ exit code: 0
I1018 17:55:59.504] Recording: run_kubectl_create_kustomization_directory_tests
I1018 17:55:59.505] Running command: run_kubectl_create_kustomization_directory_tests
I1018 17:55:59.528] 
I1018 17:55:59.530] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I1018 17:56:02.300] valid-pod   0/1     Pending   0          0s
I1018 17:56:02.300] has:valid-pod
I1018 17:56:03.386] Successful
I1018 17:56:03.387] message:NAME        READY   STATUS    RESTARTS   AGE
I1018 17:56:03.387] valid-pod   0/1     Pending   0          0s
I1018 17:56:03.387] STATUS      REASON          MESSAGE
I1018 17:56:03.387] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I1018 17:56:03.387] has:Timeout exceeded while reading body
I1018 17:56:03.481] Successful
I1018 17:56:03.481] message:NAME        READY   STATUS    RESTARTS   AGE
I1018 17:56:03.482] valid-pod   0/1     Pending   0          1s
I1018 17:56:03.482] has:valid-pod
I1018 17:56:03.552] Successful
I1018 17:56:03.553] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I1018 17:56:03.553] has:Invalid timeout value
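The timeout validation above accepts either bare seconds or an integer plus a time unit. Assuming the --request-timeout flag that get.sh exercises:

kubectl get pod valid-pod --request-timeout=1m    # accepted
kubectl get pod valid-pod --request-timeout=abc   # rejected: Invalid timeout value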
I1018 17:56:03.638] pod "valid-pod" deleted
I1018 17:56:03.661] +++ exit code: 0
I1018 17:56:03.702] Recording: run_crd_tests
I1018 17:56:03.702] Running command: run_crd_tests
I1018 17:56:03.727] 
... skipping 158 lines ...
I1018 17:56:10.309] foo.company.com/test patched
I1018 17:56:10.449] crd.sh:236: Successful get foos/test {{.patched}}: value1
I1018 17:56:10.572] foo.company.com/test patched
I1018 17:56:10.720] crd.sh:238: Successful get foos/test {{.patched}}: value2
I1018 17:56:10.844] foo.company.com/test patched
I1018 17:56:10.987] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I1018 17:56:11.240] +++ [1018 17:56:11] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I1018 17:56:11.340] {
I1018 17:56:11.341]     "apiVersion": "company.com/v1",
I1018 17:56:11.341]     "kind": "Foo",
I1018 17:56:11.341]     "metadata": {
I1018 17:56:11.341]         "annotations": {
I1018 17:56:11.341]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 190 lines ...
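For the CustomResource case above, the locally applied patch must be a merge patch, since kubectl has no strategic-merge metadata for company.com/v1, Kind=Foo. A sketch of the working form, with foo.yaml standing in for the test's manifest:

kubectl patch --local -f foo.yaml --type merge \
  --patch '{"patched":null}' -o json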
I1018 17:56:28.566] namespace/non-native-resources created
I1018 17:56:28.747] bar.company.com/test created
I1018 17:56:28.850] crd.sh:455: Successful get bars {{len .items}}: 1
I1018 17:56:28.929] namespace "non-native-resources" deleted
I1018 17:56:34.143] crd.sh:458: Successful get bars {{len .items}}: 0
I1018 17:56:34.320] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W1018 17:56:34.421] Error from server (NotFound): namespaces "non-native-resources" not found
I1018 17:56:34.522] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I1018 17:56:34.529] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I1018 17:56:34.634] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I1018 17:56:34.663] +++ exit code: 0
I1018 17:56:35.013] Recording: run_cmd_with_img_tests
I1018 17:56:35.014] Running command: run_cmd_with_img_tests
... skipping 3 lines ...
I1018 17:56:35.046] +++ command: run_cmd_with_img_tests
I1018 17:56:35.057] +++ [1018 17:56:35] Creating namespace namespace-1571421395-6041
I1018 17:56:35.154] namespace/namespace-1571421395-6041 created
I1018 17:56:35.235] Context "test" modified.
I1018 17:56:35.243] +++ [1018 17:56:35] Testing cmd with image
W1018 17:56:35.343] W1018 17:56:35.328509   49536 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1018 17:56:35.344] E1018 17:56:35.331369   53086 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:35.347] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1018 17:56:35.349] I1018 17:56:35.348609   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421395-6041", Name:"test1", UID:"4edd9caa-eaa2-4ba1-b0e4-a800b20bf862", APIVersion:"apps/v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-6cdffdb5b8 to 1
W1018 17:56:35.353] I1018 17:56:35.353268   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421395-6041", Name:"test1-6cdffdb5b8", UID:"97df6312-7ef6-47d4-a34f-feb870de7b15", APIVersion:"apps/v1", ResourceVersion:"927", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-8zcxs
W1018 17:56:35.430] W1018 17:56:35.429929   49536 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1018 17:56:35.433] E1018 17:56:35.432497   53086 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:56:35.533] Successful
I1018 17:56:35.534] message:deployment.apps/test1 created
I1018 17:56:35.534] has:deployment.apps/test1 created
I1018 17:56:35.534] deployment.apps "test1" deleted
I1018 17:56:35.553] Successful
I1018 17:56:35.553] message:error: Invalid image name "InvalidImageName": invalid reference format
I1018 17:56:35.553] has:error: Invalid image name "InvalidImageName": invalid reference format
I1018 17:56:35.568] +++ exit code: 0
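The deprecation warning at 17:56:35 names the replacements for generator-based kubectl run; either of the following creates an equivalent workload (the image name is assumed, the log does not show it):

kubectl run test1 --generator=run-pod/v1 --image=busybox   # bare Pod
kubectl create deployment test1 --image=busybox            # Deployment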
W1018 17:56:35.669] W1018 17:56:35.536233   49536 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1018 17:56:35.669] E1018 17:56:35.537901   53086 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:35.669] W1018 17:56:35.641920   49536 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W1018 17:56:35.670] E1018 17:56:35.643831   53086 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:56:35.770] +++ [1018 17:56:35] Testing recursive resources
I1018 17:56:35.771] +++ [1018 17:56:35] Creating namespace namespace-1571421395-13075
I1018 17:56:35.815] namespace/namespace-1571421395-13075 created
I1018 17:56:35.896] Context "test" modified.
I1018 17:56:35.992] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:36.323] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:36.326] Successful
I1018 17:56:36.327] message:pod/busybox0 created
I1018 17:56:36.327] pod/busybox1 created
I1018 17:56:36.328] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1018 17:56:36.328] has:error validating data: kind not set
I1018 17:56:36.421] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:36.607] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I1018 17:56:36.612] Successful
I1018 17:56:36.613] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:36.613] has:Object 'Kind' is missing
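Every recursive case that follows feeds kubectl a directory in which one manifest is broken on purpose: the echoed JSON shows "ind":"Pod", i.e. a misspelled kind field, which is exactly why decoding fails with "Object 'Kind' is missing". Reconstructed from that JSON, the fixture is roughly:

cat <<'EOF' >busybox-broken.yaml
apiVersion: v1
ind: Pod            # "kind" deliberately misspelled
metadata:
  labels:
    app: busybox2
  name: busybox2
spec:
  containers:
  - command: ["sleep", "3600"]
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
EOF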
I1018 17:56:36.713] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:36.980] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1018 17:56:36.983] Successful
I1018 17:56:36.984] message:pod/busybox0 replaced
I1018 17:56:36.984] pod/busybox1 replaced
I1018 17:56:36.984] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1018 17:56:36.985] has:error validating data: kind not set
I1018 17:56:37.079] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:37.182] Successful
I1018 17:56:37.182] message:Name:         busybox0
I1018 17:56:37.183] Namespace:    namespace-1571421395-13075
I1018 17:56:37.183] Priority:     0
I1018 17:56:37.183] Node:         <none>
... skipping 159 lines ...
I1018 17:56:37.204] has:Object 'Kind' is missing
I1018 17:56:37.279] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:37.481] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I1018 17:56:37.485] Successful
I1018 17:56:37.485] message:pod/busybox0 annotated
I1018 17:56:37.485] pod/busybox1 annotated
I1018 17:56:37.486] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:37.487] has:Object 'Kind' is missing
I1018 17:56:37.582] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:37.857] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I1018 17:56:37.860] Successful
I1018 17:56:37.860] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1018 17:56:37.861] pod/busybox0 configured
I1018 17:56:37.861] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I1018 17:56:37.861] pod/busybox1 configured
I1018 17:56:37.861] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1018 17:56:37.861] has:error validating data: kind not set
I1018 17:56:37.963] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:38.134] deployment.apps/nginx created
W1018 17:56:38.235] E1018 17:56:36.333046   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.235] E1018 17:56:36.434130   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.236] E1018 17:56:36.539430   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.236] E1018 17:56:36.645571   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.237] E1018 17:56:37.334448   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.238] E1018 17:56:37.435684   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.238] E1018 17:56:37.540867   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.238] E1018 17:56:37.647230   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.239] I1018 17:56:38.138618   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421395-13075", Name:"nginx", UID:"f91689ed-4057-4eea-81d9-0103e238042b", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1018 17:56:38.239] I1018 17:56:38.142294   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421395-13075", Name:"nginx-f87d999f7", UID:"bcecce7c-72d4-4bdb-ae44-5708a13e147b", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-h8v28
W1018 17:56:38.240] I1018 17:56:38.145424   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421395-13075", Name:"nginx-f87d999f7", UID:"bcecce7c-72d4-4bdb-ae44-5708a13e147b", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-fbq7r
W1018 17:56:38.240] I1018 17:56:38.145854   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421395-13075", Name:"nginx-f87d999f7", UID:"bcecce7c-72d4-4bdb-ae44-5708a13e147b", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-rrnp4
W1018 17:56:38.336] E1018 17:56:38.336168   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:38.404] kubectl convert is DEPRECATED and will be removed in a future version.
W1018 17:56:38.404] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W1018 17:56:38.437] E1018 17:56:38.437022   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:56:38.538] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1018 17:56:38.538] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1018 17:56:38.539] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I1018 17:56:38.539] Successful
I1018 17:56:38.539] message:apiVersion: extensions/v1beta1
I1018 17:56:38.539] kind: Deployment
... skipping 40 lines ...
I1018 17:56:38.596] deployment.apps "nginx" deleted
I1018 17:56:38.695] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:38.876] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:38.878] Successful
I1018 17:56:38.879] message:kubectl convert is DEPRECATED and will be removed in a future version.
I1018 17:56:38.879] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1018 17:56:38.880] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:38.880] has:Object 'Kind' is missing
I1018 17:56:38.980] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:39.069] Successful
I1018 17:56:39.070] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:39.070] has:busybox0:busybox1:
I1018 17:56:39.073] Successful
I1018 17:56:39.073] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:39.073] has:Object 'Kind' is missing
I1018 17:56:39.169] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:39.267] pod/busybox0 labeled
I1018 17:56:39.267] pod/busybox1 labeled
I1018 17:56:39.268] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:39.359] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I1018 17:56:39.363] Successful
I1018 17:56:39.363] message:pod/busybox0 labeled
I1018 17:56:39.363] pod/busybox1 labeled
I1018 17:56:39.363] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:39.364] has:Object 'Kind' is missing
I1018 17:56:39.454] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:39.540] pod/busybox0 patched
I1018 17:56:39.541] pod/busybox1 patched
I1018 17:56:39.541] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:39.638] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I1018 17:56:39.641] Successful
I1018 17:56:39.641] message:pod/busybox0 patched
I1018 17:56:39.641] pod/busybox1 patched
I1018 17:56:39.642] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:39.642] has:Object 'Kind' is missing
I1018 17:56:39.737] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:39.927] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:39.929] Successful
I1018 17:56:39.930] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1018 17:56:39.930] pod "busybox0" force deleted
I1018 17:56:39.930] pod "busybox1" force deleted
I1018 17:56:39.930] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I1018 17:56:39.930] has:Object 'Kind' is missing
I1018 17:56:40.026] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:40.190] replicationcontroller/busybox0 created
I1018 17:56:40.194] replicationcontroller/busybox1 created
I1018 17:56:40.298] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:40.418] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:40.513] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I1018 17:56:40.609] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I1018 17:56:40.822] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1018 17:56:40.918] (Bgeneric-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I1018 17:56:40.921] Successful
I1018 17:56:40.922] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I1018 17:56:40.922] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I1018 17:56:40.922] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:40.923] has:Object 'Kind' is missing
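The autoscale pass over the same broken directory shows recursive processing working file by file: the two valid replication controllers get HPAs (min 1, max 2, target 80%, matching the checks above) while the broken manifest contributes only the decode error. Roughly:

kubectl autoscale -f hack/testdata/recursive/rc --recursive \
  --min=1 --max=2 --cpu-percent=80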
I1018 17:56:41.021] horizontalpodautoscaler.autoscaling "busybox0" deleted
I1018 17:56:41.116] horizontalpodautoscaler.autoscaling "busybox1" deleted
I1018 17:56:41.219] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:41.316] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I1018 17:56:41.418] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I1018 17:56:41.631] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1018 17:56:41.737] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I1018 17:56:41.740] Successful
I1018 17:56:41.740] message:service/busybox0 exposed
I1018 17:56:41.740] service/busybox1 exposed
I1018 17:56:41.741] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:41.741] has:Object 'Kind' is missing
I1018 17:56:41.841] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:41.942] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I1018 17:56:42.044] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I1018 17:56:42.273] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I1018 17:56:42.368] (Bgeneric-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I1018 17:56:42.370] Successful
I1018 17:56:42.371] message:replicationcontroller/busybox0 scaled
I1018 17:56:42.371] replicationcontroller/busybox1 scaled
I1018 17:56:42.372] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:42.372] has:Object 'Kind' is missing
I1018 17:56:42.466] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:42.646] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:42.648] Successful
I1018 17:56:42.648] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I1018 17:56:42.649] replicationcontroller "busybox0" force deleted
I1018 17:56:42.649] replicationcontroller "busybox1" force deleted
I1018 17:56:42.650] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:42.650] has:Object 'Kind' is missing
I1018 17:56:42.739] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:42.903] deployment.apps/nginx1-deployment created
I1018 17:56:42.914] deployment.apps/nginx0-deployment created
W1018 17:56:43.015] E1018 17:56:38.542266   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.016] E1018 17:56:38.648512   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.016] I1018 17:56:39.033806   53086 namespace_controller.go:185] Namespace has been deleted non-native-resources
W1018 17:56:43.017] E1018 17:56:39.337910   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.017] E1018 17:56:39.438654   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.018] E1018 17:56:39.543660   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.018] E1018 17:56:39.649937   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.019] I1018 17:56:40.194230   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421395-13075", Name:"busybox0", UID:"4bb73e8b-f5c0-4bdb-a937-f77509c85fb9", APIVersion:"v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-9sbxx
W1018 17:56:43.020] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1018 17:56:43.020] I1018 17:56:40.199274   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421395-13075", Name:"busybox1", UID:"3f9490d9-89f6-4c97-9035-16062b3daf6c", APIVersion:"v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-g27r8
W1018 17:56:43.021] E1018 17:56:40.345665   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.021] E1018 17:56:40.440071   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.021] E1018 17:56:40.545492   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.022] E1018 17:56:40.651217   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.022] E1018 17:56:41.347358   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.022] E1018 17:56:41.441544   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.023] E1018 17:56:41.546925   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.023] E1018 17:56:41.653050   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.023] I1018 17:56:42.145171   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421395-13075", Name:"busybox0", UID:"4bb73e8b-f5c0-4bdb-a937-f77509c85fb9", APIVersion:"v1", ResourceVersion:"1004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-mkhb5
W1018 17:56:43.024] I1018 17:56:42.156587   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421395-13075", Name:"busybox1", UID:"3f9490d9-89f6-4c97-9035-16062b3daf6c", APIVersion:"v1", ResourceVersion:"1008", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-627rx
W1018 17:56:43.024] E1018 17:56:42.348935   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.024] E1018 17:56:42.443175   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.025] E1018 17:56:42.549112   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.025] E1018 17:56:42.655100   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.025] I1018 17:56:42.910205   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421395-13075", Name:"nginx1-deployment", UID:"31c86c1f-ee39-4aa4-bc70-e85051b056e5", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W1018 17:56:43.025] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1018 17:56:43.026] I1018 17:56:42.917394   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421395-13075", Name:"nginx0-deployment", UID:"87a41714-dfe9-454f-b3d4-1ec633958283", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W1018 17:56:43.026] I1018 17:56:42.917464   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421395-13075", Name:"nginx1-deployment-7bdbbfb5cf", UID:"f31cae0b-d12c-4f38-b23e-3058e4012f6a", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-55r6p
W1018 17:56:43.026] I1018 17:56:42.919591   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421395-13075", Name:"nginx0-deployment-57c6bff7f6", UID:"942c8f13-a992-46ba-aec5-828191ff23fc", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-5stpx
W1018 17:56:43.027] I1018 17:56:42.925670   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421395-13075", Name:"nginx1-deployment-7bdbbfb5cf", UID:"f31cae0b-d12c-4f38-b23e-3058e4012f6a", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-mqmql
W1018 17:56:43.027] I1018 17:56:42.925963   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421395-13075", Name:"nginx0-deployment-57c6bff7f6", UID:"942c8f13-a992-46ba-aec5-828191ff23fc", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-cwvkm
I1018 17:56:43.128] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I1018 17:56:43.128] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1018 17:56:43.329] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I1018 17:56:43.332] Successful
I1018 17:56:43.332] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I1018 17:56:43.332] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I1018 17:56:43.333] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1018 17:56:43.333] has:Object 'Kind' is missing
W1018 17:56:43.434] E1018 17:56:43.350149   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.445] E1018 17:56:43.444731   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:56:43.545] deployment.apps/nginx1-deployment paused
I1018 17:56:43.546] deployment.apps/nginx0-deployment paused
I1018 17:56:43.548] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I1018 17:56:43.550] Successful
I1018 17:56:43.551] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1018 17:56:43.552] has:Object 'Kind' is missing
I1018 17:56:43.643] deployment.apps/nginx1-deployment resumed
I1018 17:56:43.646] deployment.apps/nginx0-deployment resumed
I1018 17:56:43.749] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I1018 17:56:43.751] Successful
I1018 17:56:43.752] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1018 17:56:43.753] has:Object 'Kind' is missing
W1018 17:56:43.853] E1018 17:56:43.551488   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.854] E1018 17:56:43.656027   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:43.932] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1018 17:56:43.949] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1018 17:56:44.050] Successful
I1018 17:56:44.050] message:deployment.apps/nginx1-deployment 
I1018 17:56:44.050] REVISION  CHANGE-CAUSE
I1018 17:56:44.051] 1         <none>
I1018 17:56:44.051] 
I1018 17:56:44.051] deployment.apps/nginx0-deployment 
I1018 17:56:44.051] REVISION  CHANGE-CAUSE
I1018 17:56:44.051] 1         <none>
I1018 17:56:44.051] 
I1018 17:56:44.052] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1018 17:56:44.052] has:nginx0-deployment
I1018 17:56:44.052] Successful
I1018 17:56:44.052] message:deployment.apps/nginx1-deployment 
I1018 17:56:44.052] REVISION  CHANGE-CAUSE
I1018 17:56:44.053] 1         <none>
I1018 17:56:44.053] 
I1018 17:56:44.053] deployment.apps/nginx0-deployment 
I1018 17:56:44.053] REVISION  CHANGE-CAUSE
I1018 17:56:44.053] 1         <none>
I1018 17:56:44.053] 
I1018 17:56:44.054] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1018 17:56:44.054] has:nginx1-deployment
I1018 17:56:44.054] Successful
I1018 17:56:44.054] message:deployment.apps/nginx1-deployment 
I1018 17:56:44.054] REVISION  CHANGE-CAUSE
I1018 17:56:44.054] 1         <none>
I1018 17:56:44.054] 
I1018 17:56:44.055] deployment.apps/nginx0-deployment 
I1018 17:56:44.055] REVISION  CHANGE-CAUSE
I1018 17:56:44.055] 1         <none>
I1018 17:56:44.055] 
I1018 17:56:44.055] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I1018 17:56:44.056] has:Object 'Kind' is missing
I1018 17:56:44.056] deployment.apps "nginx1-deployment" force deleted
I1018 17:56:44.056] deployment.apps "nginx0-deployment" force deleted
W1018 17:56:44.352] E1018 17:56:44.351710   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:44.447] E1018 17:56:44.446449   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:44.553] E1018 17:56:44.553227   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:44.658] E1018 17:56:44.657664   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:56:45.047] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:45.198] replicationcontroller/busybox0 created
I1018 17:56:45.203] replicationcontroller/busybox1 created
I1018 17:56:45.305] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1018 17:56:45.407] Successful
I1018 17:56:45.408] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I1018 17:56:45.411] message:no rollbacker has been implemented for "ReplicationController"
I1018 17:56:45.411] no rollbacker has been implemented for "ReplicationController"
I1018 17:56:45.411] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:45.412] has:Object 'Kind' is missing
I1018 17:56:45.508] Successful
I1018 17:56:45.509] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:45.509] error: replicationcontrollers "busybox0" pausing is not supported
I1018 17:56:45.509] error: replicationcontrollers "busybox1" pausing is not supported
I1018 17:56:45.509] has:Object 'Kind' is missing
I1018 17:56:45.512] Successful
I1018 17:56:45.512] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:45.512] error: replicationcontrollers "busybox0" pausing is not supported
I1018 17:56:45.512] error: replicationcontrollers "busybox1" pausing is not supported
I1018 17:56:45.512] has:replicationcontrollers "busybox0" pausing is not supported
I1018 17:56:45.514] Successful
I1018 17:56:45.515] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:45.515] error: replicationcontrollers "busybox0" pausing is not supported
I1018 17:56:45.515] error: replicationcontrollers "busybox1" pausing is not supported
I1018 17:56:45.515] has:replicationcontrollers "busybox1" pausing is not supported
W1018 17:56:45.616] I1018 17:56:45.202135   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421395-13075", Name:"busybox0", UID:"e51d69fd-42fa-4c1b-8e35-71ae327115aa", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-vhlsm
W1018 17:56:45.617] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W1018 17:56:45.617] I1018 17:56:45.205605   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421395-13075", Name:"busybox1", UID:"ed703adb-8710-4c82-9473-bdc5c85d87a9", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jwz6k
W1018 17:56:45.617] E1018 17:56:45.353417   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:45.618] E1018 17:56:45.448059   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:45.618] E1018 17:56:45.555241   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:45.659] E1018 17:56:45.659166   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:45.701] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1018 17:56:45.718] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:45.818] Successful
I1018 17:56:45.819] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:45.819] error: replicationcontrollers "busybox0" resuming is not supported
I1018 17:56:45.819] error: replicationcontrollers "busybox1" resuming is not supported
I1018 17:56:45.819] has:Object 'Kind' is missing
I1018 17:56:45.819] Successful
I1018 17:56:45.820] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:45.820] error: replicationcontrollers "busybox0" resuming is not supported
I1018 17:56:45.820] error: replicationcontrollers "busybox1" resuming is not supported
I1018 17:56:45.820] has:replicationcontrollers "busybox0" resuming is not supported
I1018 17:56:45.820] Successful
I1018 17:56:45.821] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I1018 17:56:45.821] error: replicationcontrollers "busybox0" resuming is not supported
I1018 17:56:45.821] error: replicationcontrollers "busybox1" resuming is not supported
I1018 17:56:45.821] has:replicationcontrollers "busybox1" resuming is not supported
I1018 17:56:45.821] replicationcontroller "busybox0" force deleted
I1018 17:56:45.821] replicationcontroller "busybox1" force deleted
W1018 17:56:46.355] E1018 17:56:46.355084   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:46.450] E1018 17:56:46.449947   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:46.557] E1018 17:56:46.556728   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:46.661] E1018 17:56:46.660840   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:56:46.762] Recording: run_namespace_tests
I1018 17:56:46.762] Running command: run_namespace_tests
I1018 17:56:46.762] 
I1018 17:56:46.762] +++ Running case: test-cmd.run_namespace_tests 
I1018 17:56:46.762] +++ working dir: /go/src/k8s.io/kubernetes
I1018 17:56:46.762] +++ command: run_namespace_tests
I1018 17:56:46.771] +++ [1018 17:56:46] Testing kubectl(v1:namespaces)
I1018 17:56:46.847] namespace/my-namespace created
I1018 17:56:46.939] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1018 17:56:47.019] namespace "my-namespace" deleted
W1018 17:56:47.357] E1018 17:56:47.356513   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 19 lines ...
W1018 17:56:51.989] I1018 17:56:51.989279   53086 shared_informer.go:197] Waiting for caches to sync for resource quota
W1018 17:56:51.990] I1018 17:56:51.989352   53086 shared_informer.go:204] Caches are synced for resource quota 
I1018 17:56:52.120] namespace/my-namespace condition met
I1018 17:56:52.214] Successful
I1018 17:56:52.215] message:Error from server (NotFound): namespaces "my-namespace" not found
I1018 17:56:52.215] has: not found
I1018 17:56:52.288] namespace/my-namespace created
I1018 17:56:52.384] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I1018 17:56:52.589] Successful
I1018 17:56:52.589] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1018 17:56:52.589] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I1018 17:56:52.593] namespace "namespace-1571421360-31536" deleted
I1018 17:56:52.593] namespace "namespace-1571421361-2268" deleted
I1018 17:56:52.593] namespace "namespace-1571421363-31866" deleted
I1018 17:56:52.593] namespace "namespace-1571421365-21759" deleted
I1018 17:56:52.593] namespace "namespace-1571421395-13075" deleted
I1018 17:56:52.593] namespace "namespace-1571421395-6041" deleted
I1018 17:56:52.593] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1018 17:56:52.594] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1018 17:56:52.594] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1018 17:56:52.594] has:warning: deleting cluster-scoped resources
I1018 17:56:52.594] Successful
I1018 17:56:52.594] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I1018 17:56:52.594] namespace "kube-node-lease" deleted
I1018 17:56:52.595] namespace "my-namespace" deleted
I1018 17:56:52.595] namespace "namespace-1571421260-13696" deleted
... skipping 27 lines ...
I1018 17:56:52.598] namespace "namespace-1571421360-31536" deleted
I1018 17:56:52.598] namespace "namespace-1571421361-2268" deleted
I1018 17:56:52.598] namespace "namespace-1571421363-31866" deleted
I1018 17:56:52.598] namespace "namespace-1571421365-21759" deleted
I1018 17:56:52.598] namespace "namespace-1571421395-13075" deleted
I1018 17:56:52.598] namespace "namespace-1571421395-6041" deleted
I1018 17:56:52.598] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I1018 17:56:52.599] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I1018 17:56:52.599] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I1018 17:56:52.599] has:namespace "my-namespace" deleted
I1018 17:56:52.704] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I1018 17:56:52.784] namespace/other created
I1018 17:56:52.888] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I1018 17:56:52.984] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:53.164] pod/valid-pod created
W1018 17:56:53.265] E1018 17:56:52.363724   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:53.266] I1018 17:56:52.412033   53086 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1018 17:56:53.266] I1018 17:56:52.412518   53086 shared_informer.go:204] Caches are synced for garbage collector 
W1018 17:56:53.266] E1018 17:56:52.459750   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:53.266] E1018 17:56:52.566677   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:53.267] E1018 17:56:52.670292   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:53.366] E1018 17:56:53.365323   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:53.462] E1018 17:56:53.461608   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:56:53.562] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1018 17:56:53.563] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1018 17:56:53.563] Successful
I1018 17:56:53.564] message:error: a resource cannot be retrieved by name across all namespaces
I1018 17:56:53.564] has:a resource cannot be retrieved by name across all namespaces
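The check above exercises kubectl's refusal to combine a resource name with --all-namespaces; a hedged reconstruction of the probe (pod name taken from the test, subcommand assumed):
  kubectl get pods valid-pod --all-namespaces
  # error: a resource cannot be retrieved by name across all namespaces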
I1018 17:56:53.576] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1018 17:56:53.659] pod "valid-pod" force deleted
I1018 17:56:53.759] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:53.838] namespace "other" deleted
W1018 17:56:53.939] E1018 17:56:53.568237   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:53.939] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W1018 17:56:53.940] E1018 17:56:53.672039   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:54.368] E1018 17:56:54.367145   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:54.464] E1018 17:56:54.463945   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:54.570] E1018 17:56:54.569648   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:54.674] E1018 17:56:54.673653   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:55.369] E1018 17:56:55.368831   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:55.466] E1018 17:56:55.465607   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:55.571] E1018 17:56:55.571048   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:55.676] E1018 17:56:55.675303   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:56:55.705] I1018 17:56:55.704531   53086 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1571421395-13075
W1018 17:56:55.709] I1018 17:56:55.708609   53086 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1571421395-13075
W1018 17:56:56.371] E1018 17:56:56.370380   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 11 lines ...
I1018 17:56:58.972] +++ exit code: 0
I1018 17:56:59.013] Recording: run_secrets_test
I1018 17:56:59.014] Running command: run_secrets_test
I1018 17:56:59.042] 
I1018 17:56:59.045] +++ Running case: test-cmd.run_secrets_test 
I1018 17:56:59.049] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 44 lines ...
I1018 17:56:59.683] core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:56:59.767] secret/test-secret created
I1018 17:56:59.867] core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1018 17:56:59.971] core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
I1018 17:57:00.142] secret "test-secret" deleted
W1018 17:57:00.242] I1018 17:56:59.309309   69022 loader.go:375] Config loaded from file:  /tmp/tmp.pL34uig2M7/.kube/config
W1018 17:57:00.243] E1018 17:56:59.377730   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:00.243] E1018 17:56:59.471524   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:00.243] E1018 17:56:59.575671   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:00.244] E1018 17:56:59.691378   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:00.344] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:00.344] secret/test-secret created
I1018 17:57:00.429] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1018 17:57:00.530] core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I1018 17:57:00.737] secret "test-secret" deleted
W1018 17:57:00.838] E1018 17:57:00.379382   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:00.838] E1018 17:57:00.474425   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:00.839] E1018 17:57:00.583878   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:00.839] E1018 17:57:00.693010   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:00.939] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:00.940] secret/test-secret created
I1018 17:57:01.048] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1018 17:57:01.147] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1018 17:57:01.240] secret "test-secret" deleted
I1018 17:57:01.335] secret/test-secret created
W1018 17:57:01.436] E1018 17:57:01.381193   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:01.476] E1018 17:57:01.476063   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:01.577] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I1018 17:57:01.577] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1018 17:57:01.647] secret "test-secret" deleted
W1018 17:57:01.748] E1018 17:57:01.585326   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:01.749] E1018 17:57:01.694643   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:01.849] secret/secret-string-data created
I1018 17:57:01.932] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1018 17:57:02.038] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I1018 17:57:02.144] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I1018 17:57:02.239] secret "secret-string-data" deleted
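The three checks above confirm the stringData round-trip: plaintext values supplied under stringData are persisted base64-encoded in .data, and .stringData itself is never stored. A minimal sketch (namespace and output flags assumed from the test context):
  kubectl create --namespace=test-secrets -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-string-data
  stringData:
    k1: v1   # persisted as data.k1: djE= (base64 of "v1")
    k2: v2   # persisted as data.k2: djI= (base64 of "v2")
  EOF
  kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'
  # map[k1:djE= k2:djI=]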
W1018 17:57:02.340] I1018 17:57:02.207528   53086 namespace_controller.go:185] Namespace has been deleted my-namespace
W1018 17:57:02.383] E1018 17:57:02.382699   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:02.478] E1018 17:57:02.477475   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:02.578] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:02.579] secret "test-secret" deleted
I1018 17:57:02.624] namespace "test-secrets" deleted
W1018 17:57:02.725] E1018 17:57:02.586599   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:02.726] E1018 17:57:02.696233   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:02.751] I1018 17:57:02.750492   53086 namespace_controller.go:185] Namespace has been deleted kube-node-lease
W1018 17:57:02.778] I1018 17:57:02.778003   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421282-32091
W1018 17:57:02.779] I1018 17:57:02.778055   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421260-13696
W1018 17:57:02.779] I1018 17:57:02.778089   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421276-6391
W1018 17:57:02.787] I1018 17:57:02.787231   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421281-2290
W1018 17:57:02.802] I1018 17:57:02.801507   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421273-19860
... skipping 15 lines ...
W1018 17:57:03.334] I1018 17:57:03.333211   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421339-23211
W1018 17:57:03.334] I1018 17:57:03.333256   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421340-3736
W1018 17:57:03.356] I1018 17:57:03.356132   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421350-27629
W1018 17:57:03.368] I1018 17:57:03.367338   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421319-24237
W1018 17:57:03.376] I1018 17:57:03.375401   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421360-15906
W1018 17:57:03.376] I1018 17:57:03.375401   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421341-13705
W1018 17:57:03.384] E1018 17:57:03.384277   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:03.423] I1018 17:57:03.422519   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421360-31536
W1018 17:57:03.440] I1018 17:57:03.439392   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421356-10370
W1018 17:57:03.456] I1018 17:57:03.456031   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421357-21196
W1018 17:57:03.480] E1018 17:57:03.479555   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:03.525] I1018 17:57:03.525266   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421363-31866
W1018 17:57:03.528] I1018 17:57:03.528375   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421361-2268
W1018 17:57:03.539] I1018 17:57:03.539008   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421365-21759
W1018 17:57:03.553] I1018 17:57:03.552463   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421395-6041
W1018 17:57:03.588] E1018 17:57:03.587996   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:03.610] I1018 17:57:03.610127   53086 namespace_controller.go:185] Namespace has been deleted namespace-1571421395-13075
W1018 17:57:03.698] E1018 17:57:03.697916   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:03.941] I1018 17:57:03.941133   53086 namespace_controller.go:185] Namespace has been deleted other
W1018 17:57:04.386] E1018 17:57:04.385993   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 15 lines ...
I1018 17:57:07.806] +++ exit code: 0
I1018 17:57:07.807] Recording: run_configmap_tests
I1018 17:57:07.807] Running command: run_configmap_tests
I1018 17:57:07.807] 
I1018 17:57:07.807] +++ Running case: test-cmd.run_configmap_tests 
I1018 17:57:07.807] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I1018 17:57:08.999] configmap/test-binary-configmap created
I1018 17:57:09.104] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I1018 17:57:09.196] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I1018 17:57:09.465] configmap "test-configmap" deleted
I1018 17:57:09.552] configmap "test-binary-configmap" deleted
I1018 17:57:09.639] namespace "test-configmaps" deleted
W1018 17:57:09.740] E1018 17:57:08.392540   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 18 lines ...
W1018 17:57:12.711] I1018 17:57:12.709950   53086 namespace_controller.go:185] Namespace has been deleted test-secrets
W1018 17:57:12.713] E1018 17:57:12.713158   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:13.401] E1018 17:57:13.400478   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:13.496] E1018 17:57:13.496258   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:13.605] E1018 17:57:13.604923   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:13.715] E1018 17:57:13.714524   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:14.402] E1018 17:57:14.401707   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:14.498] E1018 17:57:14.497692   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:14.606] E1018 17:57:14.606190   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:14.716] E1018 17:57:14.715596   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:14.817] +++ exit code: 0
I1018 17:57:14.817] Recording: run_client_config_tests
I1018 17:57:14.818] Running command: run_client_config_tests
I1018 17:57:14.824] 
I1018 17:57:14.827] +++ Running case: test-cmd.run_client_config_tests 
I1018 17:57:14.830] +++ working dir: /go/src/k8s.io/kubernetes
I1018 17:57:14.833] +++ command: run_client_config_tests
I1018 17:57:14.845] +++ [1018 17:57:14] Creating namespace namespace-1571421434-352
I1018 17:57:14.921] namespace/namespace-1571421434-352 created
I1018 17:57:14.997] Context "test" modified.
I1018 17:57:15.004] +++ [1018 17:57:15] Testing client config
I1018 17:57:15.078] Successful
I1018 17:57:15.079] message:error: stat missing: no such file or directory
I1018 17:57:15.079] has:missing: no such file or directory
I1018 17:57:15.154] Successful
I1018 17:57:15.154] message:error: stat missing: no such file or directory
I1018 17:57:15.154] has:missing: no such file or directory
I1018 17:57:15.231] Successful
I1018 17:57:15.232] message:error: stat missing: no such file or directory
I1018 17:57:15.232] has:missing: no such file or directory
I1018 17:57:15.308] Successful
I1018 17:57:15.309] message:Error in configuration: context was not found for specified context: missing-context
I1018 17:57:15.309] has:context was not found for specified context: missing-context
I1018 17:57:15.383] Successful
I1018 17:57:15.383] message:error: no server found for cluster "missing-cluster"
I1018 17:57:15.383] has:no server found for cluster "missing-cluster"
I1018 17:57:15.459] Successful
I1018 17:57:15.459] message:error: auth info "missing-user" does not exist
I1018 17:57:15.459] has:auth info "missing-user" does not exist
W1018 17:57:15.560] E1018 17:57:15.403031   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:15.560] E1018 17:57:15.499387   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:15.608] E1018 17:57:15.608079   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:15.709] Successful
I1018 17:57:15.710] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I1018 17:57:15.710] has:error loading config file
I1018 17:57:15.711] Successful
I1018 17:57:15.711] message:error: stat missing-config: no such file or directory
I1018 17:57:15.711] has:no such file or directory
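These client-config probes each point a kubectl global flag at a nonexistent entity; a hedged reconstruction of the invocations (flag values taken from the messages, the subcommand assumed):
  kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context   # context was not found for specified context
  kubectl get pods --cluster=missing-cluster   # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user         # auth info "missing-user" does not exist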
I1018 17:57:15.711] +++ exit code: 0
I1018 17:57:15.734] Recording: run_service_accounts_tests
I1018 17:57:15.735] Running command: run_service_accounts_tests
I1018 17:57:15.761] 
I1018 17:57:15.764] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I1018 17:57:16.108] namespace/test-service-accounts created
I1018 17:57:16.206] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I1018 17:57:16.282] serviceaccount/test-service-account created
I1018 17:57:16.381] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I1018 17:57:16.464] serviceaccount "test-service-account" deleted
I1018 17:57:16.554] namespace "test-service-accounts" deleted
W1018 17:57:16.655] E1018 17:57:15.717052   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 16 lines ...
W1018 17:57:19.736] I1018 17:57:19.735874   53086 namespace_controller.go:185] Namespace has been deleted test-configmaps
W1018 17:57:20.411] E1018 17:57:20.411003   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:20.507] E1018 17:57:20.506880   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:20.617] E1018 17:57:20.617132   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:20.726] E1018 17:57:20.726041   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:21.413] E1018 17:57:21.412463   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:21.509] E1018 17:57:21.508605   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:21.619] E1018 17:57:21.618220   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:21.719] +++ exit code: 0
I1018 17:57:21.719] Recording: run_job_tests
I1018 17:57:21.720] Running command: run_job_tests
I1018 17:57:21.749] 
I1018 17:57:21.752] +++ Running case: test-cmd.run_job_tests 
I1018 17:57:21.756] +++ working dir: /go/src/k8s.io/kubernetes
I1018 17:57:21.759] +++ command: run_job_tests
I1018 17:57:21.772] +++ [1018 17:57:21] Creating namespace namespace-1571421441-16752
I1018 17:57:21.854] namespace/namespace-1571421441-16752 created
I1018 17:57:21.927] Context "test" modified.
I1018 17:57:21.935] +++ [1018 17:57:21] Testing job
W1018 17:57:22.036] E1018 17:57:21.727542   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:22.137] batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
I1018 17:57:22.137] namespace/test-jobs created
I1018 17:57:22.220] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I1018 17:57:22.307] cronjob.batch/pi created
I1018 17:57:22.400] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I1018 17:57:22.480] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
... skipping 3 lines ...
I1018 17:57:22.572] Labels:                        run=pi
I1018 17:57:22.573] Annotations:                   <none>
I1018 17:57:22.573] Schedule:                      59 23 31 2 *
I1018 17:57:22.573] Concurrency Policy:            Allow
I1018 17:57:22.574] Suspend:                       False
I1018 17:57:22.574] Successful Job History Limit:  3
I1018 17:57:22.574] Failed Job History Limit:      1
I1018 17:57:22.575] Starting Deadline Seconds:     <unset>
I1018 17:57:22.575] Selector:                      <unset>
I1018 17:57:22.575] Parallelism:                   <unset>
I1018 17:57:22.575] Completions:                   <unset>
I1018 17:57:22.575] Pod Template:
I1018 17:57:22.575]   Labels:  run=pi
... skipping 32 lines ...
I1018 17:57:23.124]                 run=pi
I1018 17:57:23.125] Annotations:    cronjob.kubernetes.io/instantiate: manual
I1018 17:57:23.125] Controlled By:  CronJob/pi
I1018 17:57:23.125] Parallelism:    1
I1018 17:57:23.125] Completions:    1
I1018 17:57:23.125] Start Time:     Fri, 18 Oct 2019 17:57:22 +0000
I1018 17:57:23.125] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I1018 17:57:23.125] Pod Template:
I1018 17:57:23.126]   Labels:  controller-uid=2f0651e8-3d73-466a-87fe-156c44e3637d
I1018 17:57:23.126]            job-name=test-job
I1018 17:57:23.126]            run=pi
I1018 17:57:23.126]   Containers:
I1018 17:57:23.126]    pi:
... skipping 16 lines ...
I1018 17:57:23.130]   ----    ------            ----  ----            -------
I1018 17:57:23.130]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-kvfjr
I1018 17:57:23.212] job.batch "test-job" deleted
I1018 17:57:23.311] cronjob.batch "pi" deleted
I1018 17:57:23.404] namespace "test-jobs" deleted
W1018 17:57:23.505] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1018 17:57:23.505] E1018 17:57:22.414346   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 lines (identical reflector errors) ...
W1018 17:57:23.507] I1018 17:57:22.850312   53086 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"2f0651e8-3d73-466a-87fe-156c44e3637d", APIVersion:"batch/v1", ResourceVersion:"1396", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-kvfjr
W1018 17:57:23.507] E1018 17:57:23.415595   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 14 lines (identical reflector errors) ...
W1018 17:57:26.652] I1018 17:57:26.652351   53086 namespace_controller.go:185] Namespace has been deleted test-service-accounts
W1018 17:57:26.736] E1018 17:57:26.735776   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 6 lines (identical reflector errors) ...
I1018 17:57:28.620] +++ exit code: 0
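Note: run_job_tests above exercises the CronJob-to-Job flow. A minimal sketch of the equivalent kubectl session, inferred from the log (the cronjob goes through the deprecated `kubectl run --generator=cronjob/v1beta1` path the warning above flags; the perl arguments are an assumption carried over from the later create-job test):

  kubectl create namespace test-jobs
  kubectl run pi --generator=cronjob/v1beta1 --schedule='59 23 31 2 *' \
      --image=k8s.gcr.io/perl --namespace=test-jobs -- perl -Mbignum=bpi -wle 'print bpi(10)'
  kubectl create job test-job --from=cronjob/pi --namespace=test-jobs   # manual instantiation
  kubectl describe job test-job --namespace=test-jobs
  kubectl delete job test-job --namespace=test-jobs
  kubectl delete cronjob pi --namespace=test-jobs
  kubectl delete namespace test-jobs

The manually instantiated Job carries the cronjob.kubernetes.io/instantiate: manual annotation and a Controlled By: CronJob/pi owner reference, both visible in the describe output above.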
I1018 17:57:28.621] Recording: run_create_job_tests
I1018 17:57:28.621] Running command: run_create_job_tests
I1018 17:57:28.621] 
I1018 17:57:28.621] +++ Running case: test-cmd.run_create_job_tests 
I1018 17:57:28.621] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 2 lines ...
I1018 17:57:28.705] namespace/namespace-1571421448-1335 created
I1018 17:57:28.781] Context "test" modified.
I1018 17:57:28.866] job.batch/test-job created
I1018 17:57:28.965] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I1018 17:57:29.062] job.batch "test-job" deleted
I1018 17:57:29.162] job.batch/test-job-pi created
W1018 17:57:29.263] E1018 17:57:28.629753   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:29.263] E1018 17:57:28.738970   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:29.264] I1018 17:57:28.861250   53086 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571421448-1335", Name:"test-job", UID:"1431db6a-34eb-47b6-bb6d-20e3071157ed", APIVersion:"batch/v1", ResourceVersion:"1415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-59vmp
W1018 17:57:29.265] I1018 17:57:29.153976   53086 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571421448-1335", Name:"test-job-pi", UID:"e3f67808-fbf0-456b-a67f-5202cab8344b", APIVersion:"batch/v1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-n4rhq
I1018 17:57:29.365] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I1018 17:57:29.366] job.batch "test-job-pi" deleted
I1018 17:57:29.451] cronjob.batch/test-pi created
W1018 17:57:29.551] E1018 17:57:29.425674   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:29.552] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1018 17:57:29.552] E1018 17:57:29.521271   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:29.554] I1018 17:57:29.553874   53086 event.go:262] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1571421448-1335", Name:"my-pi", UID:"e9f2a96b-2300-4ded-996a-06e6d451eb58", APIVersion:"batch/v1", ResourceVersion:"1431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-7n24f
W1018 17:57:29.632] E1018 17:57:29.631334   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:29.643] I1018 17:57:29.642119   53086 event.go:262] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"namespace-1571421448-1335", Name:"test-pi", UID:"1c29a341-c0d6-43f2-9c54-4744b42d928a", APIVersion:"batch/v1beta1", ResourceVersion:"1430", FieldPath:""}): type: 'Warning' reason: 'UnexpectedJob' Saw a job that the controller did not create or forgot: my-pi
W1018 17:57:29.741] E1018 17:57:29.740732   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:29.841] job.batch/my-pi created
I1018 17:57:29.842] Successful
I1018 17:57:29.842] message:[perl -Mbignum=bpi -wle print bpi(10)]
I1018 17:57:29.842] has:perl -Mbignum=bpi -wle print bpi(10)
I1018 17:57:29.842] job.batch "my-pi" deleted
I1018 17:57:29.842] cronjob.batch "test-pi" deleted
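Note: the create-job tests above cover the three `kubectl create job` forms; a sketch with images and arguments taken from the assertions and describe output in the log:

  kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
  kubectl create job test-job-pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
  kubectl create job my-pi --from=cronjob/test-pi   # copies the cronjob's pod template

The 'UnexpectedJob' warning above is the cronjob controller noticing my-pi, a Job it did not create itself.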
... skipping 8 lines ...
I1018 17:57:30.025] namespace/namespace-1571421449-2333 created
I1018 17:57:30.102] Context "test" modified.
I1018 17:57:30.110] +++ [1018 17:57:30] Testing pod templates
I1018 17:57:30.213] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:30.416] podtemplate/nginx created
W1018 17:57:30.517] I1018 17:57:30.413137   49536 controller.go:606] quota admission added evaluator for: podtemplates
W1018 17:57:30.517] E1018 17:57:30.427287   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:30.523] E1018 17:57:30.522539   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:30.623] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1018 17:57:30.633] NAME    CONTAINERS   IMAGES   POD LABELS
I1018 17:57:30.635] nginx   nginx        nginx    name=nginx
W1018 17:57:30.736] E1018 17:57:30.632876   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:30.742] E1018 17:57:30.742242   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:30.872] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1018 17:57:30.967] podtemplate "nginx" deleted
I1018 17:57:31.087] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:31.103] +++ exit code: 0
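Note: the podtemplate case above creates, lists, and deletes a v1 PodTemplate. A sketch of a manifest consistent with the `get podtemplates` columns above (nginx container/image, name=nginx pod label); anything beyond those fields is an assumption:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: PodTemplate
  metadata:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
  EOF
  kubectl get podtemplates
  kubectl delete podtemplate nginx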
I1018 17:57:31.146] Recording: run_service_tests
I1018 17:57:31.146] Running command: run_service_tests
... skipping 2 lines ...
I1018 17:57:31.183] +++ working dir: /go/src/k8s.io/kubernetes
I1018 17:57:31.187] +++ command: run_service_tests
I1018 17:57:31.280] Context "test" modified.
I1018 17:57:31.289] +++ [1018 17:57:31] Testing kubectl(v1:services)
I1018 17:57:31.402] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1018 17:57:31.594] service/redis-master created
W1018 17:57:31.695] E1018 17:57:31.428632   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 lines (identical reflector errors) ...
I1018 17:57:31.845] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1018 17:57:31.846] 
I1018 17:57:31.846] core.sh:864: FAIL!
I1018 17:57:31.846] Describe services redis-master
I1018 17:57:31.846]   Expected Match: Name:
I1018 17:57:31.846]   Not found in:
I1018 17:57:31.847] Name:              redis-master
I1018 17:57:31.847] Namespace:         default
I1018 17:57:31.847] Labels:            app=redis
... skipping 56 lines ...
I1018 17:57:32.222] TargetPort:        6379/TCP
I1018 17:57:32.222] Endpoints:         <none>
I1018 17:57:32.222] Session Affinity:  None
I1018 17:57:32.222] Events:            <none>
I1018 17:57:32.223] 
I1018 17:57:32.339] 
I1018 17:57:32.339] FAIL!
I1018 17:57:32.340] Describe services
I1018 17:57:32.340]   Expected Match: Name:
I1018 17:57:32.340]   Not found in:
I1018 17:57:32.341] Name:              kubernetes
I1018 17:57:32.341] Namespace:         default
I1018 17:57:32.341] Labels:            component=apiserver
... skipping 23 lines ...
I1018 17:57:32.344] Endpoints:         <none>
I1018 17:57:32.344] Session Affinity:  None
I1018 17:57:32.344] Events:            <none>
I1018 17:57:32.344] 
I1018 17:57:32.345] 872 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/core.sh
I1018 17:57:32.345] 
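Note: the FAIL!/Successful describe pairs in this log come from test-cmd's describe assertions, which search `kubectl describe` output for an expected token and retry until it appears (the trailing '872 .../core.sh' line is the script line that flagged the miss). A rough sketch of the pattern; the function below is illustrative, not the actual hack/lib/test.sh helper:

  assert_describe_contains() {
    local expected=$1; shift
    local output
    output=$(kubectl describe "$@")
    if grep -q "${expected}" <<<"${output}"; then
      echo "Successful describe"
      echo "${output}"
    else
      echo "FAIL!"
      echo "Describe $*"
      echo "  Expected Match: ${expected}"
      echo "  Not found in:"
      echo "${output}"
    fi
  }
  assert_describe_contains "Name:" services redis-master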
W1018 17:57:32.445] E1018 17:57:32.429925   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:32.527] E1018 17:57:32.527204   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:32.628] Successful describe
I1018 17:57:32.629] Name:              kubernetes
I1018 17:57:32.629] Namespace:         default
I1018 17:57:32.629] Labels:            component=apiserver
I1018 17:57:32.629]                    provider=kubernetes
I1018 17:57:32.629] Annotations:       <none>
... skipping 148 lines ...
I1018 17:57:33.383]   selector:
I1018 17:57:33.383]     role: padawan
I1018 17:57:33.383]   sessionAffinity: None
I1018 17:57:33.383]   type: ClusterIP
I1018 17:57:33.383] status:
I1018 17:57:33.383]   loadBalancer: {}
W1018 17:57:33.484] E1018 17:57:32.636082   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:33.484] E1018 17:57:32.746423   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:33.485] E1018 17:57:33.431811   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:33.485] error: you must specify resources by --filename when --local is set.
W1018 17:57:33.485] Example resource specifications include:
W1018 17:57:33.485]    '-f rsrc.yaml'
W1018 17:57:33.485]    '--filename=rsrc.json'
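Note: the `error: you must specify resources by --filename when --local is set.` line is `kubectl set selector` being run client-side: with --local the object must be supplied from a file rather than fetched from the server. A sketch (the manifest filename is hypothetical; the padawan selector matches the dry-run YAML above):

  # fails: --local without -f gives kubectl no object to mutate
  kubectl set selector service redis-master role=padawan --local -o yaml
  # works: rewrites the selector on a local manifest and prints it, no API call
  kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml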
W1018 17:57:33.502] I1018 17:57:33.501952   53086 namespace_controller.go:185] Namespace has been deleted test-jobs
W1018 17:57:33.529] E1018 17:57:33.529188   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:33.630] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I1018 17:57:33.738] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1018 17:57:33.821] (Bservice "redis-master" deleted
I1018 17:57:33.917] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1018 17:57:34.010] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1018 17:57:34.178] (Bservice/redis-master created
W1018 17:57:34.279] E1018 17:57:33.640306   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:34.279] E1018 17:57:33.747598   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:34.380] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1018 17:57:34.391] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I1018 17:57:34.579] service/service-v1-test created
W1018 17:57:34.679] E1018 17:57:34.433497   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 lines (identical reflector errors) ...
I1018 17:57:34.850] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1018 17:57:34.884] service/service-v1-test replaced
I1018 17:57:34.992] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I1018 17:57:35.079] (Bservice "redis-master" deleted
I1018 17:57:35.171] service "service-v1-test" deleted
I1018 17:57:35.275] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1018 17:57:35.368] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1018 17:57:35.524] service/redis-master created
W1018 17:57:35.625] E1018 17:57:35.435055   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:35.626] E1018 17:57:35.532559   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:35.644] E1018 17:57:35.643916   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:35.745] service/redis-slave created
I1018 17:57:35.783] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I1018 17:57:35.865] Successful
I1018 17:57:35.865] message:NAME           RSRC
I1018 17:57:35.865] kubernetes     145
I1018 17:57:35.865] redis-master   1468
... skipping 33 lines ...
I1018 17:57:37.823] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:38.011] daemonset.apps/bind created
I1018 17:57:38.111] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I1018 17:57:38.287] daemonset.apps/bind configured
I1018 17:57:38.387] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I1018 17:57:38.489] daemonset.apps/bind image updated
W1018 17:57:38.589] E1018 17:57:35.750553   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 4 lines (identical reflector errors) ...
W1018 17:57:38.591] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W1018 17:57:38.592] I1018 17:57:36.862943   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"24b51779-07d1-497f-b24d-f0e8965ea26f", APIVersion:"apps/v1", ResourceVersion:"1484", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W1018 17:57:38.592] I1018 17:57:36.867991   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"f1179b30-6911-4cf3-be8d-de6f57ddb9f7", APIVersion:"apps/v1", ResourceVersion:"1485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-5q84q
W1018 17:57:38.593] I1018 17:57:36.871308   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"f1179b30-6911-4cf3-be8d-de6f57ddb9f7", APIVersion:"apps/v1", ResourceVersion:"1485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-hz4tc
W1018 17:57:38.593] E1018 17:57:37.438135   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 lines (identical reflector errors) ...
W1018 17:57:38.594] I1018 17:57:38.008437   49536 controller.go:606] quota admission added evaluator for: daemonsets.apps
W1018 17:57:38.594] I1018 17:57:38.019902   49536 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W1018 17:57:38.595] E1018 17:57:38.439497   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:38.595] E1018 17:57:38.537347   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:38.650] E1018 17:57:38.649728   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:38.751] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I1018 17:57:38.751] daemonset.apps/bind env updated
I1018 17:57:38.821] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I1018 17:57:38.923] daemonset.apps/bind resource requirements updated
W1018 17:57:39.024] E1018 17:57:38.755701   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:39.124] apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
I1018 17:57:39.136] daemonset.apps/bind restarted
I1018 17:57:39.236] apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
I1018 17:57:39.319] daemonset.apps "bind" deleted
I1018 17:57:39.345] +++ exit code: 0
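Note: each mutation in the daemonset block above bumps .metadata.generation from 1 through 5; an identical re-apply leaves it at 1, then set image, set env, set resources, and rollout restart increment it once each. A sketch of the command shapes (the container glob, env var, and resource values are assumptions):

  kubectl apply -f ds.yaml                                              # generation 1
  kubectl apply -f ds.yaml                                              # unchanged, still 1
  kubectl set image daemonset/bind '*=k8s.gcr.io/pause:latest'          # 2
  kubectl set env daemonset/bind FOO=bar                                # 3
  kubectl set resources daemonset/bind --limits=cpu=200m,memory=512Mi   # 4
  kubectl rollout restart daemonset/bind                                # 5
  kubectl get daemonsets bind -o go-template='{{.metadata.generation}}'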
I1018 17:57:39.381] Recording: run_daemonset_history_tests
... skipping 5 lines ...
I1018 17:57:39.427] +++ [1018 17:57:39] Creating namespace namespace-1571421459-7642
I1018 17:57:39.500] namespace/namespace-1571421459-7642 created
I1018 17:57:39.579] Context "test" modified.
I1018 17:57:39.586] +++ [1018 17:57:39] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I1018 17:57:39.680] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:39.851] daemonset.apps/bind created
W1018 17:57:39.952] E1018 17:57:39.441368   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 lines (identical reflector errors) ...
I1018 17:57:40.054] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1571421459-7642"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1018 17:57:40.054]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I1018 17:57:40.067] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I1018 17:57:40.173] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1018 17:57:40.269] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1018 17:57:40.445] daemonset.apps/bind configured
W1018 17:57:40.546] E1018 17:57:40.442636   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:40.546] E1018 17:57:40.540572   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:40.647] apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1018 17:57:40.677] apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1018 17:57:40.778] apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1018 17:57:40.883] apps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1571421459-7642"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1018 17:57:40.884]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1571421459-7642"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I1018 17:57:40.885]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
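Note: because every apply in this block uses --record, the full kubectl command line is stored as the kubernetes.io/change-cause annotation and copied onto the ControllerRevision snapshotted for each template generation, which is what the two long annotation dumps above assert. Sketch:

  kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml --record
  kubectl apply -f hack/testdata/rollingupdate-daemonset-rv2.yaml --record
  kubectl rollout history daemonset/bind    # one revision per template generation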
... skipping 5 lines ...
I1018 17:57:40.995]     Port:	<none>
I1018 17:57:40.995]     Host Port:	<none>
I1018 17:57:40.995]     Environment:	<none>
I1018 17:57:40.995]     Mounts:	<none>
I1018 17:57:40.995]   Volumes:	<none>
I1018 17:57:40.995]  (dry run)
W1018 17:57:41.096] E1018 17:57:40.659242   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:41.096] E1018 17:57:40.759384   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:41.197] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1018 17:57:41.217] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1018 17:57:41.319] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I1018 17:57:41.433] daemonset.apps/bind rolled back
I1018 17:57:41.535] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1018 17:57:41.635] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1018 17:57:41.771] Successful
I1018 17:57:41.772] message:error: unable to find specified revision 1000000 in history
I1018 17:57:41.772] has:unable to find specified revision
I1018 17:57:41.875] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I1018 17:57:41.979] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I1018 17:57:42.093] daemonset.apps/bind rolled back
I1018 17:57:42.196] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I1018 17:57:42.302] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
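Note: the rollback sequence above: `rollout undo --to-revision=1` restores the single-container pause:2.0 template, an undo to a nonexistent revision fails with 'unable to find specified revision', and a plain undo rolls back to the previous two-container template again. Sketch:

  kubectl rollout undo daemonset/bind --to-revision=1
  kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision
  kubectl rollout undo daemonset/bind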
... skipping 10 lines ...
I1018 17:57:42.663] namespace/namespace-1571421462-16550 created
I1018 17:57:42.735] Context "test" modified.
I1018 17:57:42.742] +++ [1018 17:57:42] Testing kubectl(v1:replicationcontrollers)
I1018 17:57:42.834] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:42.991] replicationcontroller/frontend created
I1018 17:57:43.085] replicationcontroller "frontend" deleted
W1018 17:57:43.186] E1018 17:57:41.444068   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 lines (identical reflector errors) ...
W1018 17:57:43.192] E1018 17:57:42.104989   53086 daemon_controller.go:302] namespace-1571421459-7642/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1571421459-7642", SelfLink:"/apis/apps/v1/namespaces/namespace-1571421459-7642/daemonsets/bind", UID:"535bd393-2e57-4d7b-8e97-3d05121ed141", ResourceVersion:"1553", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63707018259, loc:(*time.Location)(0x776a040)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1571421459-7642\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001ebb0e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, 
v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002d4c898), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029e65a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001ebb100), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00096e848)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002d4c8ec)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W1018 17:57:43.193] E1018 17:57:42.445661   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 lines (identical reflector errors) ...
W1018 17:57:43.195] I1018 17:57:42.996523   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"1ce0df51-9fd0-4d65-82b3-8b87f17d246c", APIVersion:"v1", ResourceVersion:"1561", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4hx97
W1018 17:57:43.196] I1018 17:57:42.999149   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"1ce0df51-9fd0-4d65-82b3-8b87f17d246c", APIVersion:"v1", ResourceVersion:"1561", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hqcnv
W1018 17:57:43.197] I1018 17:57:42.999315   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"1ce0df51-9fd0-4d65-82b3-8b87f17d246c", APIVersion:"v1", ResourceVersion:"1561", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tkcnl
I1018 17:57:43.297] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:43.298] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:43.460] replicationcontroller/frontend created
I1018 17:57:43.562] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1018 17:57:43.679] 
I1018 17:57:43.684] core.sh:1061: FAIL!
I1018 17:57:43.684] Describe rc frontend
I1018 17:57:43.684]   Expected Match: Name:
I1018 17:57:43.684]   Not found in:
I1018 17:57:43.684] Name:         frontend
I1018 17:57:43.684] Namespace:    namespace-1571421462-16550
I1018 17:57:43.685] Selector:     app=guestbook,tier=frontend
I1018 17:57:43.685] Labels:       app=guestbook
I1018 17:57:43.685]               tier=frontend
I1018 17:57:43.686] Annotations:  <none>
I1018 17:57:43.686] Replicas:     3 current / 3 desired
I1018 17:57:43.686] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:43.686] Pod Template:
I1018 17:57:43.686]   Labels:  app=guestbook
I1018 17:57:43.687]            tier=frontend
I1018 17:57:43.687]   Containers:
I1018 17:57:43.687]    php-redis:
I1018 17:57:43.687]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 20 lines ...
I1018 17:57:43.794] Namespace:    namespace-1571421462-16550
I1018 17:57:43.794] Selector:     app=guestbook,tier=frontend
I1018 17:57:43.794] Labels:       app=guestbook
I1018 17:57:43.794]               tier=frontend
I1018 17:57:43.794] Annotations:  <none>
I1018 17:57:43.794] Replicas:     3 current / 3 desired
I1018 17:57:43.795] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:43.795] Pod Template:
I1018 17:57:43.795]   Labels:  app=guestbook
I1018 17:57:43.795]            tier=frontend
I1018 17:57:43.795]   Containers:
I1018 17:57:43.795]    php-redis:
I1018 17:57:43.795]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I1018 17:57:43.796]   Type    Reason            Age   From                    Message
I1018 17:57:43.796]   ----    ------            ----  ----                    -------
I1018 17:57:43.796]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-t54g4
I1018 17:57:43.796]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-khs8j
I1018 17:57:43.796]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-zg9g7
I1018 17:57:43.796] 
W1018 17:57:43.897] E1018 17:57:43.446678   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:43.898] I1018 17:57:43.462644   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"14e54e69-9cf8-4c9f-b6ae-c3e4583189c9", APIVersion:"v1", ResourceVersion:"1578", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t54g4
W1018 17:57:43.898] I1018 17:57:43.465572   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"14e54e69-9cf8-4c9f-b6ae-c3e4583189c9", APIVersion:"v1", ResourceVersion:"1578", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-khs8j
W1018 17:57:43.898] I1018 17:57:43.466172   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"14e54e69-9cf8-4c9f-b6ae-c3e4583189c9", APIVersion:"v1", ResourceVersion:"1578", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zg9g7
W1018 17:57:43.898] E1018 17:57:43.546214   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:43.899] E1018 17:57:43.663491   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:43.899] E1018 17:57:43.766234   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:43.999] core.sh:1065: Successful describe
I1018 17:57:44.000] Name:         frontend
I1018 17:57:44.000] Namespace:    namespace-1571421462-16550
I1018 17:57:44.000] Selector:     app=guestbook,tier=frontend
I1018 17:57:44.000] Labels:       app=guestbook
I1018 17:57:44.000]               tier=frontend
I1018 17:57:44.000] Annotations:  <none>
I1018 17:57:44.001] Replicas:     3 current / 3 desired
I1018 17:57:44.001] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:44.001] Pod Template:
I1018 17:57:44.001]   Labels:  app=guestbook
I1018 17:57:44.001]            tier=frontend
I1018 17:57:44.001]   Containers:
I1018 17:57:44.001]    php-redis:
I1018 17:57:44.002]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I1018 17:57:44.028] Namespace:    namespace-1571421462-16550
I1018 17:57:44.028] Selector:     app=guestbook,tier=frontend
I1018 17:57:44.028] Labels:       app=guestbook
I1018 17:57:44.028]               tier=frontend
I1018 17:57:44.028] Annotations:  <none>
I1018 17:57:44.029] Replicas:     3 current / 3 desired
I1018 17:57:44.029] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:44.029] Pod Template:
I1018 17:57:44.029]   Labels:  app=guestbook
I1018 17:57:44.030]            tier=frontend
I1018 17:57:44.030]   Containers:
I1018 17:57:44.030]    php-redis:
I1018 17:57:44.030]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1018 17:57:44.032]   ----    ------            ----  ----                    -------
I1018 17:57:44.032]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-t54g4
I1018 17:57:44.032]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-khs8j
I1018 17:57:44.033]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-zg9g7
I1018 17:57:44.033] 
I1018 17:57:44.142] 
I1018 17:57:44.143] FAIL!
I1018 17:57:44.143] Describe rc
I1018 17:57:44.143]   Expected Match: Name:
I1018 17:57:44.143]   Not found in:
I1018 17:57:44.143] Name:         frontend
I1018 17:57:44.143] Namespace:    namespace-1571421462-16550
I1018 17:57:44.143] Selector:     app=guestbook,tier=frontend
I1018 17:57:44.143] Labels:       app=guestbook
I1018 17:57:44.143]               tier=frontend
I1018 17:57:44.143] Annotations:  <none>
I1018 17:57:44.144] Replicas:     3 current / 3 desired
I1018 17:57:44.144] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:44.144] Pod Template:
I1018 17:57:44.144]   Labels:  app=guestbook
I1018 17:57:44.144]            tier=frontend
I1018 17:57:44.144]   Containers:
I1018 17:57:44.144]    php-redis:
I1018 17:57:44.144]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 20 lines ...
I1018 17:57:44.254] Namespace:    namespace-1571421462-16550
I1018 17:57:44.254] Selector:     app=guestbook,tier=frontend
I1018 17:57:44.254] Labels:       app=guestbook
I1018 17:57:44.254]               tier=frontend
I1018 17:57:44.254] Annotations:  <none>
I1018 17:57:44.255] Replicas:     3 current / 3 desired
I1018 17:57:44.255] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:44.255] Pod Template:
I1018 17:57:44.255]   Labels:  app=guestbook
I1018 17:57:44.255]            tier=frontend
I1018 17:57:44.255]   Containers:
I1018 17:57:44.256]    php-redis:
I1018 17:57:44.256]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I1018 17:57:44.362] Namespace:    namespace-1571421462-16550
I1018 17:57:44.363] Selector:     app=guestbook,tier=frontend
I1018 17:57:44.363] Labels:       app=guestbook
I1018 17:57:44.363]               tier=frontend
I1018 17:57:44.363] Annotations:  <none>
I1018 17:57:44.363] Replicas:     3 current / 3 desired
I1018 17:57:44.363] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:44.363] Pod Template:
I1018 17:57:44.363]   Labels:  app=guestbook
I1018 17:57:44.363]            tier=frontend
I1018 17:57:44.364]   Containers:
I1018 17:57:44.364]    php-redis:
I1018 17:57:44.364]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I1018 17:57:44.482] Namespace:    namespace-1571421462-16550
I1018 17:57:44.482] Selector:     app=guestbook,tier=frontend
I1018 17:57:44.482] Labels:       app=guestbook
I1018 17:57:44.482]               tier=frontend
I1018 17:57:44.482] Annotations:  <none>
I1018 17:57:44.482] Replicas:     3 current / 3 desired
I1018 17:57:44.482] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:44.483] Pod Template:
I1018 17:57:44.483]   Labels:  app=guestbook
I1018 17:57:44.483]            tier=frontend
I1018 17:57:44.483]   Containers:
I1018 17:57:44.483]    php-redis:
I1018 17:57:44.483]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 16 lines ...
I1018 17:57:44.698] replicationcontroller/frontend scaled
I1018 17:57:44.793] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I1018 17:57:44.881] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 2
I1018 17:57:45.086] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I1018 17:57:45.176] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I1018 17:57:45.263] replicationcontroller/frontend scaled
W1018 17:57:45.364] E1018 17:57:44.447874   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:45.364] E1018 17:57:44.547632   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:45.365] E1018 17:57:44.664966   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:45.366] E1018 17:57:44.698795   53086 replica_set.go:202] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1571421462-16550 /api/v1/namespaces/namespace-1571421462-16550/replicationcontrollers/frontend 14e54e69-9cf8-4c9f-b6ae-c3e4583189c9 1587 2 2019-10-18 17:57:43 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00197deb8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
W1018 17:57:45.366] I1018 17:57:44.705321   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"14e54e69-9cf8-4c9f-b6ae-c3e4583189c9", APIVersion:"v1", ResourceVersion:"1587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-zg9g7
W1018 17:57:45.367] E1018 17:57:44.768076   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:45.367] error: Expected replicas to be 3, was 2
W1018 17:57:45.367] I1018 17:57:45.266344   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"14e54e69-9cf8-4c9f-b6ae-c3e4583189c9", APIVersion:"v1", ResourceVersion:"1593", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qh27d
W1018 17:57:45.449] E1018 17:57:45.449105   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:45.545] E1018 17:57:45.543608   53086 replica_set.go:202] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1571421462-16550 /api/v1/namespaces/namespace-1571421462-16550/replicationcontrollers/frontend 14e54e69-9cf8-4c9f-b6ae-c3e4583189c9 1599 4 2019-10-18 17:57:43 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0003f3748 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
W1018 17:57:45.549] E1018 17:57:45.549209   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:45.553] I1018 17:57:45.552493   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"14e54e69-9cf8-4c9f-b6ae-c3e4583189c9", APIVersion:"v1", ResourceVersion:"1599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-qh27d
I1018 17:57:45.654] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I1018 17:57:45.654] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I1018 17:57:45.655] replicationcontroller/frontend scaled
I1018 17:57:45.655] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I1018 17:57:45.725] replicationcontroller "frontend" deleted
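
The scale assertions above (core.sh:1083-1107), together with the earlier "error: Expected replicas to be 3, was 2" line, match kubectl scale's optional size precondition. A minimal sketch of the commands these checks appear to exercise (resource names taken from the log; the exact script invocation is not shown here):

  # plain scale, no precondition
  kubectl scale rc frontend --replicas=2
  # precondition: only scale if the current size is 3; errors out otherwise
  kubectl scale rc frontend --current-replicas=3 --replicas=2
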
W1018 17:57:45.826] E1018 17:57:45.666362   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:45.826] E1018 17:57:45.770639   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:45.890] I1018 17:57:45.889564   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-master", UID:"8c7afc01-a426-4a62-9efd-5102a1b7b117", APIVersion:"v1", ResourceVersion:"1610", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-zbbx7
I1018 17:57:45.991] replicationcontroller/redis-master created
I1018 17:57:46.053] replicationcontroller/redis-slave created
I1018 17:57:46.149] replicationcontroller/redis-master scaled
I1018 17:57:46.153] replicationcontroller/redis-slave scaled
I1018 17:57:46.253] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
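
The back-to-back "scaled" lines for redis-master and redis-slave, followed by the core.sh:1117 read-back of 4 replicas, are consistent with a single multi-resource scale. A sketch, assuming both names are passed in one invocation:

  kubectl scale rc redis-master redis-slave --replicas=4
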
... skipping 4 lines ...
W1018 17:57:46.535] I1018 17:57:46.059014   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-slave", UID:"5cfc3318-2e67-4268-a1d7-71b39781f78f", APIVersion:"v1", ResourceVersion:"1615", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-45ln6
W1018 17:57:46.536] I1018 17:57:46.151578   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-master", UID:"8c7afc01-a426-4a62-9efd-5102a1b7b117", APIVersion:"v1", ResourceVersion:"1622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-qg7pg
W1018 17:57:46.536] I1018 17:57:46.154937   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-master", UID:"8c7afc01-a426-4a62-9efd-5102a1b7b117", APIVersion:"v1", ResourceVersion:"1622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-bv2lt
W1018 17:57:46.536] I1018 17:57:46.156247   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-slave", UID:"5cfc3318-2e67-4268-a1d7-71b39781f78f", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-t6rnj
W1018 17:57:46.536] I1018 17:57:46.157363   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-master", UID:"8c7afc01-a426-4a62-9efd-5102a1b7b117", APIVersion:"v1", ResourceVersion:"1622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-jqbgv
W1018 17:57:46.537] I1018 17:57:46.158903   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-slave", UID:"5cfc3318-2e67-4268-a1d7-71b39781f78f", APIVersion:"v1", ResourceVersion:"1624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-fjvfz
W1018 17:57:46.537] E1018 17:57:46.450141   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:46.552] E1018 17:57:46.551564   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:46.615] I1018 17:57:46.615011   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment", UID:"d737e5db-3ad0-456a-b5b7-b45260c36e8e", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1018 17:57:46.618] I1018 17:57:46.618048   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-6986c7bc94", UID:"ec20b4e1-d981-4a9a-b199-68a6edd512bc", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-flhqg
W1018 17:57:46.622] I1018 17:57:46.621691   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-6986c7bc94", UID:"ec20b4e1-d981-4a9a-b199-68a6edd512bc", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-cwm8h
W1018 17:57:46.623] I1018 17:57:46.622211   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-6986c7bc94", UID:"ec20b4e1-d981-4a9a-b199-68a6edd512bc", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-clc5r
W1018 17:57:46.668] E1018 17:57:46.668043   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:46.717] I1018 17:57:46.716110   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment", UID:"d737e5db-3ad0-456a-b5b7-b45260c36e8e", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W1018 17:57:46.722] I1018 17:57:46.721935   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-6986c7bc94", UID:"ec20b4e1-d981-4a9a-b199-68a6edd512bc", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-flhqg
W1018 17:57:46.724] I1018 17:57:46.723640   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-6986c7bc94", UID:"ec20b4e1-d981-4a9a-b199-68a6edd512bc", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-cwm8h
W1018 17:57:46.773] E1018 17:57:46.772252   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:46.873] deployment.apps/nginx-deployment created
I1018 17:57:46.873] deployment.apps/nginx-deployment scaled
I1018 17:57:46.874] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I1018 17:57:46.894] deployment.apps "nginx-deployment" deleted
I1018 17:57:46.998] Successful
I1018 17:57:46.999] message:service/expose-test-deployment exposed
I1018 17:57:46.999] has:service/expose-test-deployment exposed
I1018 17:57:47.086] service "expose-test-deployment" deleted
I1018 17:57:47.182] Successful
I1018 17:57:47.182] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I1018 17:57:47.182] See 'kubectl expose -h' for help and examples
I1018 17:57:47.182] has:invalid deployment: no selectors
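
kubectl expose derives the new service's selector from the exposed resource, so a deployment that declares no selector cannot be exposed; that is what the error above asserts. A sketch under that assumption (the manifest file name here is hypothetical):

  # fails: nothing for expose to copy into the service selector
  kubectl expose -f no-selector-deployment.yaml --port=80
  # succeeds for a normal deployment, as the nginx-deployment expose below shows
  kubectl expose deployment nginx-deployment --port=80
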
I1018 17:57:47.357] deployment.apps/nginx-deployment created
W1018 17:57:47.458] I1018 17:57:47.360722   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment", UID:"1fde4987-825c-4579-9eb9-9c81c065e01c", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1018 17:57:47.459] I1018 17:57:47.362944   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-6986c7bc94", UID:"5658e945-54a3-472b-bb11-053bb7885645", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-g6fq4
W1018 17:57:47.459] I1018 17:57:47.365729   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-6986c7bc94", UID:"5658e945-54a3-472b-bb11-053bb7885645", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-zn4l7
W1018 17:57:47.460] I1018 17:57:47.365972   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-6986c7bc94", UID:"5658e945-54a3-472b-bb11-053bb7885645", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-5j55f
W1018 17:57:47.460] E1018 17:57:47.451376   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:47.553] E1018 17:57:47.552741   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:47.654] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I1018 17:57:47.654] service/nginx-deployment exposed
I1018 17:57:47.662] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I1018 17:57:47.745] deployment.apps "nginx-deployment" deleted
I1018 17:57:47.755] service "nginx-deployment" deleted
W1018 17:57:47.856] E1018 17:57:47.669415   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:47.856] E1018 17:57:47.774052   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:47.956] I1018 17:57:47.955913   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"f1213954-9f90-445d-992c-7bd8d2b1bcca", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b9v74
W1018 17:57:47.959] I1018 17:57:47.958288   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"f1213954-9f90-445d-992c-7bd8d2b1bcca", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fgklc
W1018 17:57:47.960] I1018 17:57:47.958792   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"f1213954-9f90-445d-992c-7bd8d2b1bcca", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4zdfd
I1018 17:57:48.060] replicationcontroller/frontend created
I1018 17:57:48.061] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I1018 17:57:48.144] service/frontend exposed
... skipping 11 lines ...
I1018 17:57:49.335] service "frontend" deleted
I1018 17:57:49.345] service "frontend-2" deleted
I1018 17:57:49.352] service "frontend-3" deleted
I1018 17:57:49.359] service "frontend-4" deleted
I1018 17:57:49.365] service "frontend-5" deleted
I1018 17:57:49.462] Successful
I1018 17:57:49.463] message:error: cannot expose a Node
I1018 17:57:49.463] has:cannot expose
I1018 17:57:49.559] Successful
I1018 17:57:49.559] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I1018 17:57:49.560] has:metadata.name: Invalid value
I1018 17:57:49.655] Successful
I1018 17:57:49.656] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I1018 17:57:49.656] has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I1018 17:57:49.742] service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
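
Service names are DNS labels capped at 63 characters, which is what the paired success/failure above checks. A sketch, assuming an existing service is being re-exposed under a new name (the exact source resource is not visible in this excerpt):

  # over the limit: rejected with "must be no more than 63 characters"
  kubectl expose service kubernetes --name=invalid-large-service-name-that-has-more-than-sixty-three-characters
  # at or under the 63-character limit: accepted
  kubectl expose service kubernetes --name=kubernetes-serve-hostname-testing-sixty-three-characters-in-len
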
W1018 17:57:49.843] E1018 17:57:48.452919   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:49.843] E1018 17:57:48.554309   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:49.843] E1018 17:57:48.670870   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:49.844] E1018 17:57:48.775907   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:49.844] E1018 17:57:49.453929   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:49.844] E1018 17:57:49.555469   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:49.844] E1018 17:57:49.671747   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:49.844] E1018 17:57:49.778009   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:49.945] Successful
I1018 17:57:49.945] message:service/etcd-server exposed
I1018 17:57:49.946] has:etcd-server exposed
I1018 17:57:49.946] core.sh:1208: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
I1018 17:57:50.031] core.sh:1209: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
I1018 17:57:50.114] service "etcd-server" deleted
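
When the exposed resource declares several ports and no --port is given, expose copies them all and auto-names them port-1, port-2, and so on; the two assertions above read exactly that back. A sketch of the read-back in the log's own go-template style (assuming an etcd-server resource declaring ports 2380 and 2379):

  kubectl get service etcd-server -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'   # port-1 2380
  kubectl get service etcd-server -o go-template='{{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}'   # port-2 2379
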
I1018 17:57:50.213] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1018 17:57:50.290] replicationcontroller "frontend" deleted
I1018 17:57:50.395] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:50.492] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:50.673] replicationcontroller/frontend created
W1018 17:57:50.774] E1018 17:57:50.455345   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:50.775] E1018 17:57:50.557016   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:50.775] E1018 17:57:50.673536   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:50.775] I1018 17:57:50.675346   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"8821bd2c-8a7e-443e-9959-5e8dc3c28f41", APIVersion:"v1", ResourceVersion:"1786", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hns4f
W1018 17:57:50.776] I1018 17:57:50.678192   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"8821bd2c-8a7e-443e-9959-5e8dc3c28f41", APIVersion:"v1", ResourceVersion:"1786", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nl2m2
W1018 17:57:50.776] I1018 17:57:50.679434   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"8821bd2c-8a7e-443e-9959-5e8dc3c28f41", APIVersion:"v1", ResourceVersion:"1786", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pfwl4
W1018 17:57:50.779] E1018 17:57:50.779140   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:50.874] I1018 17:57:50.873564   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-slave", UID:"7155a7bf-6923-432a-9479-723354f94508", APIVersion:"v1", ResourceVersion:"1795", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-rjvrj
W1018 17:57:50.877] I1018 17:57:50.876932   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"redis-slave", UID:"7155a7bf-6923-432a-9479-723354f94508", APIVersion:"v1", ResourceVersion:"1795", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-pltqc
I1018 17:57:50.978] replicationcontroller/redis-slave created
I1018 17:57:50.981] core.sh:1228: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I1018 17:57:51.078] core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I1018 17:57:51.159] replicationcontroller "frontend" deleted
I1018 17:57:51.164] replicationcontroller "redis-slave" deleted
I1018 17:57:51.274] core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:51.373] core.sh:1240: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:51.534] replicationcontroller/frontend created
W1018 17:57:51.635] E1018 17:57:51.456657   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:51.636] I1018 17:57:51.538512   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"f07868a3-9981-4f4c-9a37-04afee25d25f", APIVersion:"v1", ResourceVersion:"1815", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dsxsl
W1018 17:57:51.636] I1018 17:57:51.541837   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"f07868a3-9981-4f4c-9a37-04afee25d25f", APIVersion:"v1", ResourceVersion:"1815", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n5v4x
W1018 17:57:51.637] I1018 17:57:51.542382   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1571421462-16550", Name:"frontend", UID:"f07868a3-9981-4f4c-9a37-04afee25d25f", APIVersion:"v1", ResourceVersion:"1815", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x54xn
W1018 17:57:51.637] E1018 17:57:51.557891   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:51.675] E1018 17:57:51.674932   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:51.778] core.sh:1243: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I1018 17:57:51.779] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1018 17:57:51.850] core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I1018 17:57:51.931] horizontalpodautoscaler.autoscaling "frontend" deleted
W1018 17:57:52.031] E1018 17:57:51.781018   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:52.132] horizontalpodautoscaler.autoscaling/frontend autoscaled
I1018 17:57:52.138] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1018 17:57:52.229] horizontalpodautoscaler.autoscaling "frontend" deleted
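
The two hpa assertions above tie kubectl autoscale's flags to spec fields: --min/--max map to minReplicas/maxReplicas and --cpu-percent to targetCPUUtilizationPercentage, with --max mandatory, which is what the 'required flag(s) "max" not set' error just below exercises. A sketch matching the 1 2 70 and 2 3 80 read-backs:

  kubectl autoscale rc frontend --max=2 --cpu-percent=70           # minReplicas observed as 1 when --min is unset
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
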
W1018 17:57:52.329] Error: required flag(s) "max" not set
W1018 17:57:52.329] 
W1018 17:57:52.330] 
W1018 17:57:52.330] Examples:
W1018 17:57:52.330]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W1018 17:57:52.330]   kubectl autoscale deployment foo --min=2 --max=10
W1018 17:57:52.330]   
... skipping 54 lines ...
I1018 17:57:52.564]           limits:
I1018 17:57:52.564]             cpu: 300m
I1018 17:57:52.564]           requests:
I1018 17:57:52.564]             cpu: 300m
I1018 17:57:52.565]       terminationGracePeriodSeconds: 0
I1018 17:57:52.565] status: {}
W1018 17:57:52.665] E1018 17:57:52.458430   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:52.666] E1018 17:57:52.559833   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:52.666] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
W1018 17:57:52.676] E1018 17:57:52.676162   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:52.783] E1018 17:57:52.782394   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:52.816] I1018 17:57:52.815790   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources", UID:"a38f082e-3c85-46c0-848f-d5dcd50cd457", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
W1018 17:57:52.821] I1018 17:57:52.820649   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources-67f8cfff5", UID:"38f4fe82-dda2-4d56-b048-3f8223441101", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-r4jmk
W1018 17:57:52.823] I1018 17:57:52.822551   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources-67f8cfff5", UID:"38f4fe82-dda2-4d56-b048-3f8223441101", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-qh5bs
W1018 17:57:52.825] I1018 17:57:52.824528   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources-67f8cfff5", UID:"38f4fe82-dda2-4d56-b048-3f8223441101", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-m4ct7
I1018 17:57:52.925] deployment.apps/nginx-deployment-resources created
I1018 17:57:52.926] core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I1018 17:57:53.191] deployment.apps/nginx-deployment-resources resource requirements updated
I1018 17:57:53.291] core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I1018 17:57:53.384] core.sh:1271: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I1018 17:57:53.572] deployment.apps/nginx-deployment-resources resource requirements updated
W1018 17:57:53.673] I1018 17:57:53.194814   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources", UID:"a38f082e-3c85-46c0-848f-d5dcd50cd457", APIVersion:"apps/v1", ResourceVersion:"1849", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
W1018 17:57:53.674] I1018 17:57:53.198670   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources-55c547f795", UID:"a8f97883-f819-4f9d-9bdc-dcc3091789b9", APIVersion:"apps/v1", ResourceVersion:"1850", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-4qzf5
W1018 17:57:53.674] E1018 17:57:53.459754   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:53.674] error: unable to find container named redis
W1018 17:57:53.675] E1018 17:57:53.561538   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:53.675] I1018 17:57:53.581119   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources", UID:"a38f082e-3c85-46c0-848f-d5dcd50cd457", APIVersion:"apps/v1", ResourceVersion:"1860", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
W1018 17:57:53.675] I1018 17:57:53.586303   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources-67f8cfff5", UID:"38f4fe82-dda2-4d56-b048-3f8223441101", APIVersion:"apps/v1", ResourceVersion:"1864", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-r4jmk
W1018 17:57:53.676] I1018 17:57:53.587744   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources", UID:"a38f082e-3c85-46c0-848f-d5dcd50cd457", APIVersion:"apps/v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
W1018 17:57:53.676] I1018 17:57:53.591675   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources-6d86564b45", UID:"c3001604-e7a5-4b16-b98c-7963eee88839", APIVersion:"apps/v1", ResourceVersion:"1868", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-b6wqf
W1018 17:57:53.678] E1018 17:57:53.677744   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:53.779] core.sh:1276: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1018 17:57:53.779] core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I1018 17:57:53.861] deployment.apps/nginx-deployment-resources resource requirements updated
I1018 17:57:53.966] core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1018 17:57:54.056] core.sh:1281: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1018 17:57:54.148] core.sh:1282: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
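
The three "resource requirements updated" lines plus the "unable to find container named redis" error fit kubectl set resources, which can target individual containers in the pod template. A sketch; only redis is named by the log, the flag values are illustrative:

  # update limits on every container in the pod template
  kubectl set resources deployment nginx-deployment-resources --limits=cpu=100m
  # target one container by name; errors out if no such container exists
  kubectl set resources deployment nginx-deployment-resources --containers=redis --limits=cpu=200m
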
... skipping 71 lines ...
I1018 17:57:54.247]     status: "True"
I1018 17:57:54.247]     type: Progressing
I1018 17:57:54.247]   observedGeneration: 4
I1018 17:57:54.247]   replicas: 4
I1018 17:57:54.247]   unavailableReplicas: 4
I1018 17:57:54.247]   updatedReplicas: 1
W1018 17:57:54.348] E1018 17:57:53.784382   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:54.349] I1018 17:57:53.872205   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources", UID:"a38f082e-3c85-46c0-848f-d5dcd50cd457", APIVersion:"apps/v1", ResourceVersion:"1880", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 1
W1018 17:57:54.349] I1018 17:57:53.879929   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources", UID:"a38f082e-3c85-46c0-848f-d5dcd50cd457", APIVersion:"apps/v1", ResourceVersion:"1882", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c478d4fdb to 1
W1018 17:57:54.350] I1018 17:57:53.882184   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources-67f8cfff5", UID:"38f4fe82-dda2-4d56-b048-3f8223441101", APIVersion:"apps/v1", ResourceVersion:"1884", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-qh5bs
W1018 17:57:54.350] I1018 17:57:53.882548   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421462-16550", Name:"nginx-deployment-resources-6c478d4fdb", UID:"e60e8a6c-f396-4119-8c08-efde31f42bf9", APIVersion:"apps/v1", ResourceVersion:"1887", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c478d4fdb-htvsm
W1018 17:57:54.350] error: you must specify resources by --filename when --local is set.
W1018 17:57:54.351] Example resource specifications include:
W1018 17:57:54.351]    '-f rsrc.yaml'
W1018 17:57:54.351]    '--filename=rsrc.json'
I1018 17:57:54.452] core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I1018 17:57:54.500] core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I1018 17:57:54.591] core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 23 lines ...
I1018 17:57:55.635] Successful
I1018 17:57:55.635] message:10
I1018 17:57:55.635] has:10
I1018 17:57:55.722] Successful
I1018 17:57:55.722] message:apps/v1
I1018 17:57:55.723] has:apps/v1
W1018 17:57:55.823] E1018 17:57:54.461161   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:55.824] E1018 17:57:54.562892   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:55.824] E1018 17:57:54.681007   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:55.825] E1018 17:57:54.786380   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:55.825] I1018 17:57:55.018488   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"test-nginx-extensions", UID:"69dc25be-a3f4-45be-8b3e-0fb79d7fe5b0", APIVersion:"apps/v1", ResourceVersion:"1916", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5559c76db7 to 1
W1018 17:57:55.826] I1018 17:57:55.024293   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"test-nginx-extensions-5559c76db7", UID:"d868b157-fd07-4231-bc68-a50aeb3836ff", APIVersion:"apps/v1", ResourceVersion:"1917", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5559c76db7-57x8s
W1018 17:57:55.826] I1018 17:57:55.456273   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"test-nginx-apps", UID:"36617ed8-5f3f-434d-8412-876d1b4e1066", APIVersion:"apps/v1", ResourceVersion:"1931", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-79b9bd9585 to 1
W1018 17:57:55.827] I1018 17:57:55.459670   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"test-nginx-apps-79b9bd9585", UID:"065a59ab-87c2-40b2-8557-d1efc12155cb", APIVersion:"apps/v1", ResourceVersion:"1932", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-79b9bd9585-lgngg
W1018 17:57:55.827] E1018 17:57:55.463123   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:55.827] E1018 17:57:55.564290   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:55.828] E1018 17:57:55.683347   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:55.828] E1018 17:57:55.787878   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:55.928] 
I1018 17:57:55.929] FAIL!
I1018 17:57:55.929] Describe rs
I1018 17:57:55.930]   Expected Match: Name:
I1018 17:57:55.930]   Not found in:
I1018 17:57:55.930] Name:           test-nginx-apps-79b9bd9585
I1018 17:57:55.930] Namespace:      namespace-1571421474-17944
I1018 17:57:55.930] Selector:       app=test-nginx-apps,pod-template-hash=79b9bd9585
I1018 17:57:55.931] Labels:         app=test-nginx-apps
I1018 17:57:55.931]                 pod-template-hash=79b9bd9585
I1018 17:57:55.931] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I1018 17:57:55.931]                 deployment.kubernetes.io/max-replicas: 2
I1018 17:57:55.931]                 deployment.kubernetes.io/revision: 1
I1018 17:57:55.931] Controlled By:  Deployment/test-nginx-apps
I1018 17:57:55.931] Replicas:       1 current / 1 desired
I1018 17:57:55.931] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I1018 17:57:55.931] Pod Template:
I1018 17:57:55.932]   Labels:  app=test-nginx-apps
I1018 17:57:55.932]            pod-template-hash=79b9bd9585
I1018 17:57:55.932]   Containers:
I1018 17:57:55.932]    nginx:
I1018 17:57:55.932]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 7 lines ...
I1018 17:57:55.932]   ----    ------            ----  ----                   -------
I1018 17:57:55.933]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: test-nginx-apps-79b9bd9585-lgngg
I1018 17:57:55.933] 206 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/apps.sh
I1018 17:57:55.933] 
I1018 17:57:55.933] FAIL!
I1018 17:57:55.933] Describe pods
I1018 17:57:55.933]   Expected Match: Name:
I1018 17:57:55.933]   Not found in:
I1018 17:57:55.933] Name:           test-nginx-apps-79b9bd9585-lgngg
I1018 17:57:55.933] Namespace:      namespace-1571421474-17944
I1018 17:57:55.933] Priority:       0
... skipping 30 lines ...
I1018 17:57:56.711] apps.sh:228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
I1018 17:57:56.790] deployment.apps "deployment-with-unixuserid" deleted
I1018 17:57:56.891] apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:57.041] deployment.apps/nginx-deployment created
W1018 17:57:57.142] I1018 17:57:56.189511   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-with-command", UID:"9a6c0f34-3441-4333-801b-0389180c27e7", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-757c6f58dd to 1
W1018 17:57:57.142] I1018 17:57:56.192574   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-with-command-757c6f58dd", UID:"7e70dc7e-f46d-49c3-9234-f82f5644c598", APIVersion:"apps/v1", ResourceVersion:"1946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-757c6f58dd-4m2sm
W1018 17:57:57.142] E1018 17:57:56.464457   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:57.143] E1018 17:57:56.566583   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:57.143] I1018 17:57:56.613895   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"deployment-with-unixuserid", UID:"5134cb58-9077-45dc-b6f1-fc9cc3dcbdbb", APIVersion:"apps/v1", ResourceVersion:"1960", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-8fcdfc94f to 1
W1018 17:57:57.144] I1018 17:57:56.616560   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"deployment-with-unixuserid-8fcdfc94f", UID:"e2736eea-55ff-43e5-9c77-bad8da9ae83d", APIVersion:"apps/v1", ResourceVersion:"1961", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-8fcdfc94f-9h6qr
W1018 17:57:57.144] E1018 17:57:56.685307   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:57.144] E1018 17:57:56.789944   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:57.145] I1018 17:57:57.043586   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"ebb14ee7-2e02-4848-94f6-b2462bd3004e", APIVersion:"apps/v1", ResourceVersion:"1974", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1018 17:57:57.145] I1018 17:57:57.047382   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-6986c7bc94", UID:"6c27e29b-75a5-4cc9-8b5a-a00f66e80754", APIVersion:"apps/v1", ResourceVersion:"1975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-6d4j2
W1018 17:57:57.146] I1018 17:57:57.050147   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-6986c7bc94", UID:"6c27e29b-75a5-4cc9-8b5a-a00f66e80754", APIVersion:"apps/v1", ResourceVersion:"1975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-zfq54
W1018 17:57:57.146] I1018 17:57:57.050461   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-6986c7bc94", UID:"6c27e29b-75a5-4cc9-8b5a-a00f66e80754", APIVersion:"apps/v1", ResourceVersion:"1975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-5275t
I1018 17:57:57.247] apps.sh:239: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3
I1018 17:57:57.247] deployment.apps "nginx-deployment" deleted
I1018 17:57:57.346] apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:57.440] apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:57.535] apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:57.618] deployment.apps/nginx-deployment created
I1018 17:57:57.721] apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I1018 17:57:57.805] deployment.apps "nginx-deployment" deleted
W1018 17:57:57.906] E1018 17:57:57.466345   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:57.906] E1018 17:57:57.567982   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:57.907] I1018 17:57:57.621723   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"c51becdf-d9bb-4a2f-9f4d-88c287b40341", APIVersion:"apps/v1", ResourceVersion:"1997", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7f6fc565b9 to 1
W1018 17:57:57.907] I1018 17:57:57.624403   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-7f6fc565b9", UID:"37a06c83-c434-476c-9638-e4b3e5d75418", APIVersion:"apps/v1", ResourceVersion:"1998", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7f6fc565b9-dvcxn
W1018 17:57:57.907] E1018 17:57:57.687205   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:57.908] E1018 17:57:57.791662   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:58.008] apps.sh:256: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:58.023] apps.sh:257: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I1018 17:57:58.192] replicaset.apps "nginx-deployment-7f6fc565b9" deleted
I1018 17:57:58.292] apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:58.484] deployment.apps/nginx-deployment created
W1018 17:57:58.585] E1018 17:57:58.467881   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:58.586] I1018 17:57:58.487302   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"df0beac0-e2dc-43df-8b75-34817ebda54d", APIVersion:"apps/v1", ResourceVersion:"2015", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W1018 17:57:58.586] I1018 17:57:58.491368   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-6986c7bc94", UID:"8932076f-7ff7-4d42-9b45-acec36231be3", APIVersion:"apps/v1", ResourceVersion:"2016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-w7cp9
W1018 17:57:58.586] I1018 17:57:58.494523   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-6986c7bc94", UID:"8932076f-7ff7-4d42-9b45-acec36231be3", APIVersion:"apps/v1", ResourceVersion:"2016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-b5ggs
W1018 17:57:58.587] I1018 17:57:58.494912   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-6986c7bc94", UID:"8932076f-7ff7-4d42-9b45-acec36231be3", APIVersion:"apps/v1", ResourceVersion:"2016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-rl7ts
W1018 17:57:58.587] E1018 17:57:58.569897   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:58.688] apps.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I1018 17:57:58.689] horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
W1018 17:57:58.790] E1018 17:57:58.689218   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:58.793] E1018 17:57:58.792919   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:58.894] apps.sh:271: Successful get hpa nginx-deployment {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I1018 17:57:58.897] horizontalpodautoscaler.autoscaling "nginx-deployment" deleted
I1018 17:57:58.992] deployment.apps "nginx-deployment" deleted
I1018 17:57:59.085] apps.sh:279: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:57:59.279] deployment.apps/nginx created
W1018 17:57:59.380] I1018 17:57:59.283198   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx", UID:"04823e71-7eda-4995-9d61-48914aed31cb", APIVersion:"apps/v1", ResourceVersion:"2039", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W1018 17:57:59.381] I1018 17:57:59.290194   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-f87d999f7", UID:"4dc1db0d-a9af-42c0-ac67-0751d981c427", APIVersion:"apps/v1", ResourceVersion:"2040", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-zs5cq
W1018 17:57:59.382] I1018 17:57:59.293779   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-f87d999f7", UID:"4dc1db0d-a9af-42c0-ac67-0751d981c427", APIVersion:"apps/v1", ResourceVersion:"2040", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-qq9f6
W1018 17:57:59.383] I1018 17:57:59.297116   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-f87d999f7", UID:"4dc1db0d-a9af-42c0-ac67-0751d981c427", APIVersion:"apps/v1", ResourceVersion:"2040", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-txbrn
W1018 17:57:59.470] E1018 17:57:59.469345   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:57:59.570] apps.sh:283: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I1018 17:57:59.571] apps.sh:284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1018 17:57:59.602] deployment.apps/nginx skipped rollback (current template already matches revision 1)
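
"skipped rollback" is kubectl rollout undo noticing that the requested revision already matches the live pod template; the "unable to find specified revision 1000000 in history" error further down is the same command with an out-of-range --to-revision. A sketch:

  kubectl rollout undo deployment nginx --to-revision=1          # no-op while still on revision 1
  kubectl rollout undo deployment nginx --to-revision=1000000    # error: revision not in history
  kubectl rollout history deployment nginx                       # lists the revisions that do exist
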
W1018 17:57:59.703] E1018 17:57:59.571497   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:59.703] E1018 17:57:59.691394   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:59.796] E1018 17:57:59.795367   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:57:59.878] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W1018 17:57:59.886] I1018 17:57:59.885738   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx", UID:"04823e71-7eda-4995-9d61-48914aed31cb", APIVersion:"apps/v1", ResourceVersion:"2054", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-78487f9fd7 to 1
W1018 17:57:59.889] I1018 17:57:59.888902   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-78487f9fd7", UID:"46ea3742-b5d6-4ee3-9555-2c5b42ae12d1", APIVersion:"apps/v1", ResourceVersion:"2055", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-kzmqt
I1018 17:57:59.990] apps.sh:287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1018 17:57:59.991] deployment.apps/nginx configured
I1018 17:57:59.991] apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1018 17:58:00.090]     Image:	k8s.gcr.io/nginx:test-cmd
I1018 17:58:00.192] apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1018 17:58:00.298] deployment.apps/nginx rolled back
W1018 17:58:00.472] E1018 17:58:00.471102   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:00.573] E1018 17:58:00.573150   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:00.693] E1018 17:58:00.692692   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:00.797] E1018 17:58:00.797119   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:58:01.416] apps.sh:297: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1018 17:58:01.620] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1018 17:58:01.742] deployment.apps/nginx rolled back
W1018 17:58:01.843] E1018 17:58:01.472327   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:01.843] error: unable to find specified revision 1000000 in history
W1018 17:58:01.843] E1018 17:58:01.574535   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:01.844] E1018 17:58:01.694010   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:01.844] E1018 17:58:01.800406   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:02.474] E1018 17:58:02.473576   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:02.576] E1018 17:58:02.575997   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:02.696] E1018 17:58:02.695688   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:02.802] E1018 17:58:02.802006   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:58:02.903] apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I1018 17:58:02.932] deployment.apps/nginx paused
W1018 17:58:03.034] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W1018 17:58:03.125] error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
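Both refusals stem from the deployment being paused: neither undo nor restart is allowed until the rollout resumes, which the next lines then do. A reconstructed sequence (command forms assumed from the error messages):

  kubectl rollout pause deployment nginx
  kubectl rollout undo deployment nginx      # refused while paused
  kubectl rollout restart deployment nginx   # refused while paused
  kubectl rollout resume deployment nginx
  kubectl rollout undo deployment nginx      # now succeeds ("rolled back")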
I1018 17:58:03.225] deployment.apps/nginx resumed
I1018 17:58:03.332] deployment.apps/nginx rolled back
W1018 17:58:03.475] E1018 17:58:03.475076   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:58:03.576]     deployment.kubernetes.io/revision-history: 1,3
W1018 17:58:03.677] E1018 17:58:03.577654   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:03.698] E1018 17:58:03.697349   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:03.725] error: desired revision (3) is different from the running revision (5)
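This error is what kubectl rollout status emits when pinned to a revision that is no longer the running one; presumably the harness ran something of this shape:

  kubectl rollout status deployment nginx --revision=3
  # error: desired revision (3) is different from the running revision (5)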
W1018 17:58:03.804] E1018 17:58:03.804050   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:03.826] I1018 17:58:03.826133   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx", UID:"04823e71-7eda-4995-9d61-48914aed31cb", APIVersion:"apps/v1", ResourceVersion:"2084", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-78487f9fd7 to 0
W1018 17:58:03.832] I1018 17:58:03.831997   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-78487f9fd7", UID:"46ea3742-b5d6-4ee3-9555-2c5b42ae12d1", APIVersion:"apps/v1", ResourceVersion:"2088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-78487f9fd7-kzmqt
W1018 17:58:03.833] I1018 17:58:03.832587   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx", UID:"04823e71-7eda-4995-9d61-48914aed31cb", APIVersion:"apps/v1", ResourceVersion:"2086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7cd849c974 to 1
W1018 17:58:03.838] I1018 17:58:03.837235   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-7cd849c974", UID:"a04c7b66-2554-4125-988e-ef1fbb132a8f", APIVersion:"apps/v1", ResourceVersion:"2092", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7cd849c974-c8hvf
I1018 17:58:03.938] deployment.apps/nginx restarted
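The restart proceeds as an ordinary rollout: the events just above show the old ReplicaSet (nginx-78487f9fd7) scaled to 0 and a fresh one (nginx-7cd849c974) scaled up, i.e. the pods are replaced rather than mutated in place. The trigger is a single command:

  kubectl rollout restart deployment nginx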
W1018 17:58:04.477] E1018 17:58:04.477088   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:04.580] E1018 17:58:04.579416   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:04.699] E1018 17:58:04.698938   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:04.806] E1018 17:58:04.805444   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:58:05.013] Successful
I1018 17:58:05.014] message:apiVersion: apps/v1
I1018 17:58:05.014] kind: ReplicaSet
I1018 17:58:05.014] metadata:
I1018 17:58:05.014]   annotations:
I1018 17:58:05.014]     deployment.kubernetes.io/desired-replicas: "3"
... skipping 77 lines ...
I1018 17:58:07.698] apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I1018 17:58:07.783] deployment.apps "nginx-deployment" deleted
W1018 17:58:07.884] I1018 17:58:05.171410   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx2", UID:"7ed1e2a8-8057-4cf9-a27d-138c963c558f", APIVersion:"apps/v1", ResourceVersion:"2104", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-57b7865cd9 to 3
W1018 17:58:07.884] I1018 17:58:05.174402   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx2-57b7865cd9", UID:"dc0cbb3a-cdb2-404f-a05e-a3ae596771c1", APIVersion:"apps/v1", ResourceVersion:"2105", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-76s2m
W1018 17:58:07.885] I1018 17:58:05.177075   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx2-57b7865cd9", UID:"dc0cbb3a-cdb2-404f-a05e-a3ae596771c1", APIVersion:"apps/v1", ResourceVersion:"2105", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-kn7nz
W1018 17:58:07.886] I1018 17:58:05.178042   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx2-57b7865cd9", UID:"dc0cbb3a-cdb2-404f-a05e-a3ae596771c1", APIVersion:"apps/v1", ResourceVersion:"2105", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-dbb9x
W1018 17:58:07.886] E1018 17:58:05.478475   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.886] E1018 17:58:05.580693   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.886] I1018 17:58:05.612200   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"93dbfbf9-db7c-4ac5-a66d-0210e94dcdd6", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
W1018 17:58:07.887] I1018 17:58:05.615310   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-598d4d68b4", UID:"9b8ac13f-dd82-41de-8016-304346a4e69b", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-n4cfl
W1018 17:58:07.887] I1018 17:58:05.618347   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-598d4d68b4", UID:"9b8ac13f-dd82-41de-8016-304346a4e69b", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-dwnql
W1018 17:58:07.887] I1018 17:58:05.618889   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-598d4d68b4", UID:"9b8ac13f-dd82-41de-8016-304346a4e69b", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-cxqrp
W1018 17:58:07.888] E1018 17:58:05.700022   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.888] E1018 17:58:05.807076   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.888] I1018 17:58:05.994705   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"93dbfbf9-db7c-4ac5-a66d-0210e94dcdd6", APIVersion:"apps/v1", ResourceVersion:"2153", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
W1018 17:58:07.888] I1018 17:58:05.999015   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-59df9b5f5b", UID:"f0a35581-9b77-4654-8066-333fbe1327d8", APIVersion:"apps/v1", ResourceVersion:"2154", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-nt5jq
W1018 17:58:07.889] error: unable to find container named "redis"
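kubectl set image validates the container name against the pod template before changing anything, and this deployment has no container called "redis". The shape of the failing call (the image value is hypothetical):

  kubectl set image deployment nginx-deployment redis=redis:6
  # error: unable to find container named "redis"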
W1018 17:58:07.889] E1018 17:58:06.479662   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.889] E1018 17:58:06.583804   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.889] E1018 17:58:06.701200   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.889] I1018 17:58:06.749026   53086 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1571421462-16550
W1018 17:58:07.889] E1018 17:58:06.808422   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.890] I1018 17:58:07.233865   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"93dbfbf9-db7c-4ac5-a66d-0210e94dcdd6", APIVersion:"apps/v1", ResourceVersion:"2172", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
W1018 17:58:07.890] I1018 17:58:07.239301   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-598d4d68b4", UID:"9b8ac13f-dd82-41de-8016-304346a4e69b", APIVersion:"apps/v1", ResourceVersion:"2176", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-dwnql
W1018 17:58:07.890] I1018 17:58:07.240902   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"93dbfbf9-db7c-4ac5-a66d-0210e94dcdd6", APIVersion:"apps/v1", ResourceVersion:"2174", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7d758dbc54 to 1
W1018 17:58:07.891] I1018 17:58:07.245074   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-7d758dbc54", UID:"979f4386-8df0-434b-a7f1-beb007447cc3", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7d758dbc54-wlh8j
W1018 17:58:07.891] E1018 17:58:07.480975   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.891] E1018 17:58:07.585152   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.891] E1018 17:58:07.703021   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:07.892] E1018 17:58:07.810014   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1018 17:58:07.992] apps.sh:371: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I1018 17:58:08.050] deployment.apps/nginx-deployment created
W1018 17:58:08.152] I1018 17:58:08.053420   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"408624fb-1c35-41f3-9595-72f01f15c903", APIVersion:"apps/v1", ResourceVersion:"2205", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
W1018 17:58:08.153] I1018 17:58:08.056464   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-598d4d68b4", UID:"c530dac0-d41a-49b9-ab01-4f3c8018ed19", APIVersion:"apps/v1", ResourceVersion:"2206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-56zzq
W1018 17:58:08.153] I1018 17:58:08.059016   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-598d4d68b4", UID:"c530dac0-d41a-49b9-ab01-4f3c8018ed19", APIVersion:"apps/v1", ResourceVersion:"2206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-vcnh4
W1018 17:58:08.154] I1018 17:58:08.060819   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-598d4d68b4", UID:"c530dac0-d41a-49b9-ab01-4f3c8018ed19", APIVersion:"apps/v1", ResourceVersion:"2206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-bnz5h
... skipping 5 lines ...
I1018 17:58:08.771] deployment.apps/nginx-deployment env updated
I1018 17:58:08.878] apps.sh:383: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
I1018 17:58:08.968] apps.sh:385: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
I1018 17:58:09.061] deployment.apps/nginx-deployment env updated
I1018 17:58:09.164] apps.sh:389: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2
I1018 17:58:09.263] deployment.apps/nginx-deployment env updated
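The env steps pair kubectl set env with a go-template length check on .env, matching the apps.sh assertions above (the KEY_2 name is from the log; the value here is hypothetical):

  kubectl set env deployment/nginx-deployment KEY_2=value
  kubectl get deploy nginx-deployment -o go-template='{{ len (index .spec.template.spec.containers 0).env }}'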
W1018 17:58:09.364] E1018 17:58:08.482398   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:09.364] E1018 17:58:08.586185   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:09.365] E1018 17:58:08.704720   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:09.365] I1018 17:58:08.775697   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"408624fb-1c35-41f3-9595-72f01f15c903", APIVersion:"apps/v1", ResourceVersion:"2221", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6b9f7756b4 to 1
W1018 17:58:09.366] I1018 17:58:08.780511   53086 event.go:262] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment-6b9f7756b4", UID:"a2d03ca7-2eab-4b38-8fa9-3ec3f9684763", APIVersion:"apps/v1", ResourceVersion:"2222", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6b9f7756b4-fjjrq
W1018 17:58:09.366] E1018 17:58:08.811499   53086 reflector.go:153] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W1018 17:58:09.367] I1018 17:58:09.069904   53086 event.go:262] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1571421474-17944", Name:"nginx-deployment", UID:"408624fb-1c35-41f3-9595-72f01f15c903", APIVersion:"apps/v1", ResourceVersion:"2231", FieldPath:""}): type: 'N