Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-15 13:16
Elapsed: 27m29s
Revision:
Builder: gke-prow-ssd-pool-1a225945-5dqn
pod: ef681cea-d7ba-11e9-9f18-22ab134e4c57
resultstore: https://source.cloud.google.com/results/invocations/4f01a5fa-7de6-4bac-a297-1f6f5dee806c/targets/test
infra-commit: e1cbc3ccd
repo: k8s.io/kubernetes
repo-commit: ba07527278ef2cde9c27886ec3333cfef472112a
repos: k8s.io/kubernetes=master

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
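
A minimal local reproduction sketch, assuming a k8s.io/kubernetes checkout and an etcd binary on PATH (the integration tests expect etcd at http://127.0.0.1:2379, as the storage config in the log below shows; the exact flags are illustrative):

# start a local etcd for the integration test storage backend (illustrative flags)
etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &
# run only the failing test, bypassing the test cache
go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$ -count=1 -timeout 10m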
=== RUN   TestNodePIDPressure
W0915 13:40:02.807380  108773 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0915 13:40:02.807407  108773 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0915 13:40:02.807421  108773 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0915 13:40:02.807431  108773 master.go:259] Using reconciler: 
I0915 13:40:02.810624  108773 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.810989  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.811151  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.812512  108773 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0915 13:40:02.812611  108773 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0915 13:40:02.812702  108773 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.813109  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.813132  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.814043  108773 store.go:1342] Monitoring events count at <storage-prefix>//events
I0915 13:40:02.814076  108773 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.814210  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.814227  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.814312  108773 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0915 13:40:02.814846  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.816817  108773 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0915 13:40:02.816854  108773 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.816964  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.816983  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.817055  108773 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0915 13:40:02.817209  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.818044  108773 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0915 13:40:02.818292  108773 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.818457  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.818478  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.818574  108773 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0915 13:40:02.818804  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.819949  108773 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0915 13:40:02.820167  108773 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.820271  108773 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0915 13:40:02.820308  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.820328  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.820421  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.821899  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.822594  108773 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0915 13:40:02.822787  108773 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.822941  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.822965  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.823035  108773 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0915 13:40:02.824625  108773 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0915 13:40:02.824676  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.824802  108773 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0915 13:40:02.824825  108773 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.824967  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.824991  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.826633  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.826976  108773 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0915 13:40:02.827065  108773 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0915 13:40:02.827205  108773 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.827326  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.827344  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.829137  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.829767  108773 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0915 13:40:02.829954  108773 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.830093  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.830117  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.830197  108773 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0915 13:40:02.832216  108773 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0915 13:40:02.832466  108773 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.832612  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.832655  108773 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0915 13:40:02.832726  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.834001  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.834745  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.835041  108773 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0915 13:40:02.835449  108773 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0915 13:40:02.835671  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.835826  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.835848  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.836898  108773 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0915 13:40:02.837089  108773 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.837249  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.837272  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.837355  108773 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0915 13:40:02.838552  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.839460  108773 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0915 13:40:02.839639  108773 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.839787  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.839814  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.839894  108773 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0915 13:40:02.841538  108773 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0915 13:40:02.841933  108773 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.842124  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.842145  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.842256  108773 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0915 13:40:02.843703  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.843729  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.845610  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.845664  108773 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.845846  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.845867  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.846127  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.847448  108773 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0915 13:40:02.847475  108773 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0915 13:40:02.847525  108773 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0915 13:40:02.847985  108773 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.848199  108773 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.848594  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.849403  108773 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.850764  108773 watch_cache.go:405] Replace watchCache (rev: 30584) 
I0915 13:40:02.851279  108773 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.852605  108773 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.853961  108773 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.854512  108773 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.854718  108773 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.855024  108773 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.856166  108773 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.856875  108773 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.857184  108773 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.858139  108773 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.858781  108773 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.859432  108773 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.859737  108773 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.860808  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.861094  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.861699  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.861956  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.862247  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.863312  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.863666  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.864801  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.865154  108773 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.866456  108773 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.867386  108773 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.867761  108773 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.868530  108773 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.869410  108773 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.869770  108773 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.871188  108773 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.873403  108773 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.874194  108773 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.875623  108773 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.876006  108773 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.876179  108773 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0915 13:40:02.876228  108773 master.go:461] Enabling API group "authentication.k8s.io".
I0915 13:40:02.876251  108773 master.go:461] Enabling API group "authorization.k8s.io".
I0915 13:40:02.876512  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.876784  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.876834  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.877963  108773 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0915 13:40:02.878121  108773 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0915 13:40:02.878230  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.879103  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.879577  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.879372  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.881667  108773 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0915 13:40:02.881736  108773 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0915 13:40:02.881904  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.883026  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.883395  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.883429  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.884836  108773 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0915 13:40:02.884871  108773 master.go:461] Enabling API group "autoscaling".
I0915 13:40:02.885105  108773 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.885299  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.885333  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.885461  108773 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0915 13:40:02.886993  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.889119  108773 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0915 13:40:02.889338  108773 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0915 13:40:02.889426  108773 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.889985  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.890561  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.890700  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.892987  108773 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0915 13:40:02.893015  108773 master.go:461] Enabling API group "batch".
I0915 13:40:02.893087  108773 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0915 13:40:02.893223  108773 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.893402  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.893431  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.894761  108773 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0915 13:40:02.894796  108773 master.go:461] Enabling API group "certificates.k8s.io".
I0915 13:40:02.895006  108773 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.895137  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.895157  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.895243  108773 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0915 13:40:02.896264  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.896406  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.897482  108773 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0915 13:40:02.897654  108773 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0915 13:40:02.897680  108773 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.897843  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.897864  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.898810  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.900251  108773 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0915 13:40:02.900276  108773 master.go:461] Enabling API group "coordination.k8s.io".
I0915 13:40:02.900292  108773 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0915 13:40:02.900387  108773 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0915 13:40:02.900490  108773 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.900666  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.900689  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.901158  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.901936  108773 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0915 13:40:02.901996  108773 master.go:461] Enabling API group "extensions".
I0915 13:40:02.902211  108773 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.902398  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.902421  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.902537  108773 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0915 13:40:02.904104  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.904673  108773 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0915 13:40:02.904843  108773 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.905004  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.905024  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.905128  108773 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0915 13:40:02.906776  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.908349  108773 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0915 13:40:02.908596  108773 master.go:461] Enabling API group "networking.k8s.io".
I0915 13:40:02.908779  108773 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.909057  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.909082  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.908460  108773 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0915 13:40:02.911212  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.912292  108773 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0915 13:40:02.912324  108773 master.go:461] Enabling API group "node.k8s.io".
I0915 13:40:02.912524  108773 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0915 13:40:02.912552  108773 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.913105  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.913125  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.914251  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.914665  108773 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0915 13:40:02.914859  108773 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.914989  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.915013  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.915036  108773 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0915 13:40:02.915936  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.918253  108773 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0915 13:40:02.918285  108773 master.go:461] Enabling API group "policy".
I0915 13:40:02.918490  108773 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0915 13:40:02.919562  108773 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.919818  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.919859  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.919606  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.921426  108773 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0915 13:40:02.921633  108773 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.921830  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.921855  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.921946  108773 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0915 13:40:02.922930  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.923203  108773 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0915 13:40:02.923242  108773 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.923260  108773 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0915 13:40:02.923633  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.923655  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.925296  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.927158  108773 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0915 13:40:02.927567  108773 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0915 13:40:02.927901  108773 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.928155  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.928242  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.928681  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.929985  108773 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0915 13:40:02.930045  108773 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.930183  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.930203  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.930305  108773 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0915 13:40:02.932256  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.932516  108773 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0915 13:40:02.932548  108773 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0915 13:40:02.932855  108773 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.933134  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.933191  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.934813  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.934829  108773 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0915 13:40:02.934876  108773 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.935033  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.935054  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.935120  108773 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0915 13:40:02.936254  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.936551  108773 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0915 13:40:02.936580  108773 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0915 13:40:02.936757  108773 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.936876  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.936895  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.938392  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.939548  108773 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0915 13:40:02.939594  108773 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0915 13:40:02.939995  108773 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0915 13:40:02.941688  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
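The reflector.go "Listing and watching ..." lines are each storage cacher doing an initial list and then a continuous watch to keep its cache current. The same list-then-watch pattern is what client-go's Reflector exposes against the apiserver; a hedged sketch, illustration only rather than the internal storage path used here, and assuming a kubeconfig at the default location:

    package main

    import (
        "time"

        rbacv1 "k8s.io/api/rbac/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default location; illustration only.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // ListWatch + Reflector is the same list-then-watch pattern the
        // storage cacher's reflector performs in the log above.
        lw := cache.NewListWatchFromClient(
            clientset.RbacV1().RESTClient(), "clusterrolebindings", "", fields.Everything())
        store := cache.NewStore(cache.MetaNamespaceKeyFunc)
        r := cache.NewReflector(lw, &rbacv1.ClusterRoleBinding{}, store, 0)

        stop := make(chan struct{})
        defer close(stop)
        go r.Run(stop) // initial List, then Watch; keeps the store in sync
        time.Sleep(5 * time.Second)
    }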
I0915 13:40:02.942606  108773 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.942759  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.942864  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.943872  108773 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0915 13:40:02.944055  108773 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0915 13:40:02.944091  108773 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.944279  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.944300  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.945713  108773 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0915 13:40:02.945817  108773 master.go:461] Enabling API group "scheduling.k8s.io".
I0915 13:40:02.946565  108773 master.go:450] Skipping disabled API group "settings.k8s.io".
I0915 13:40:02.945879  108773 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0915 13:40:02.946161  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.947716  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
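The client.go parsed scheme: "endpoint" and ccResolverWrapper lines show each store dialing the single test etcd over gRPC. A minimal sketch of opening the same kind of connection directly, assuming the go.etcd.io/etcd/clientv3 package in use at this time:

    package main

    import (
        "context"
        "fmt"
        "time"

        "go.etcd.io/etcd/clientv3"
    )

    func main() {
        // Same endpoint the log shows every store dialing.
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"http://127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        resp, err := cli.Get(ctx, "/", clientv3.WithPrefix(), clientv3.WithCountOnly())
        if err != nil {
            panic(err)
        }
        fmt.Println("keys under /:", resp.Count)
    }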
I0915 13:40:02.948199  108773 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.948559  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.948845  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.949882  108773 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0915 13:40:02.950060  108773 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0915 13:40:02.951164  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.952301  108773 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.952560  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.952643  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.953787  108773 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0915 13:40:02.953912  108773 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.954224  108773 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0915 13:40:02.955048  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.955142  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.956531  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.956816  108773 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0915 13:40:02.956851  108773 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0915 13:40:02.956853  108773 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.957008  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.957028  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.957709  108773 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0915 13:40:02.957915  108773 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.957940  108773 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0915 13:40:02.958073  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.958090  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.958650  108773 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0915 13:40:02.959303  108773 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.959434  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.959450  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.959453  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.959607  108773 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0915 13:40:02.960497  108773 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0915 13:40:02.960531  108773 master.go:461] Enabling API group "storage.k8s.io".
I0915 13:40:02.960702  108773 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.960862  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.960882  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.961018  108773 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0915 13:40:02.961206  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.962165  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.962605  108773 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0915 13:40:02.962723  108773 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0915 13:40:02.962960  108773 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.963119  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.963139  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.964813  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.964835  108773 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0915 13:40:02.964815  108773 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0915 13:40:02.965034  108773 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.965189  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.965214  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.966824  108773 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0915 13:40:02.966941  108773 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0915 13:40:02.967007  108773 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.967168  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.967206  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.968379  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.968667  108773 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0915 13:40:02.968731  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.968859  108773 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.968959  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.968976  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.969041  108773 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0915 13:40:02.969839  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.973551  108773 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0915 13:40:02.973574  108773 master.go:461] Enabling API group "apps".
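The recurring watch_cache.go "Replace watchCache (rev: 30585)" lines record the etcd revision at which each cache replaced its contents, so its watch can resume from exactly that point. The analogous client-side pattern is to list, remember the returned resourceVersion, and watch from it; a sketch assuming a pre-1.18 client-go where List and Watch take only ListOptions:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)

        // List first: the returned resourceVersion plays the role of the
        // "rev: 30585" the watch cache records when it replaces its contents.
        list, err := cs.AppsV1().Deployments("").List(metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        rv := list.ResourceVersion
        fmt.Println("listed at resourceVersion", rv)

        // ...then watch from that point so no intervening events are missed.
        w, err := cs.AppsV1().Deployments("").Watch(metav1.ListOptions{ResourceVersion: rv})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println("event:", ev.Type)
        }
    }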
I0915 13:40:02.973621  108773 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.973812  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.973831  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.973919  108773 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0915 13:40:02.976170  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.976874  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.977764  108773 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0915 13:40:02.977805  108773 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.977959  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.977995  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.978090  108773 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0915 13:40:02.979517  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.980418  108773 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0915 13:40:02.980477  108773 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.980613  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.980634  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.980706  108773 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0915 13:40:02.981832  108773 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0915 13:40:02.981871  108773 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.982014  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.982031  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.982046  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.982113  108773 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0915 13:40:02.983560  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.984821  108773 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0915 13:40:02.984841  108773 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0915 13:40:02.984873  108773 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.985216  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:02.985241  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:02.985321  108773 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0915 13:40:02.987708  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.988104  108773 store.go:1342] Monitoring events count at <storage-prefix>//events
I0915 13:40:02.988129  108773 master.go:461] Enabling API group "events.k8s.io".
I0915 13:40:02.988345  108773 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.989275  108773 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0915 13:40:02.989522  108773 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.989928  108773 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.990107  108773 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.990197  108773 watch_cache.go:405] Replace watchCache (rev: 30585) 
I0915 13:40:02.990226  108773 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.990336  108773 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.990646  108773 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.990801  108773 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.990950  108773 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.991030  108773 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.992442  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.993028  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.995204  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.995819  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.997447  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.998162  108773 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:02.999859  108773 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.000602  108773 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.003192  108773 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.004410  108773 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0915 13:40:03.004620  108773 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
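Group/versions that the genericapiserver.go warnings skip (and groups logged as disabled) simply never show up in discovery. A small sketch of listing what a server actually serves, using the standard discovery client (kubeconfig location assumed):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)

        // Group/versions skipped above (no resources, or disabled) are absent here.
        groups, err := cs.Discovery().ServerGroups()
        if err != nil {
            panic(err)
        }
        for _, g := range groups.Groups {
            for _, v := range g.Versions {
                fmt.Println(v.GroupVersion)
            }
        }
    }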
I0915 13:40:03.005860  108773 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.006317  108773 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.007021  108773 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.008552  108773 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.009860  108773 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.012203  108773 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.013421  108773 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.014722  108773 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.016437  108773 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.017417  108773 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.019150  108773 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0915 13:40:03.019570  108773 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0915 13:40:03.022107  108773 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.023059  108773 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.023981  108773 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.025762  108773 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.026592  108773 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.027630  108773 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.028753  108773 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.029670  108773 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.030614  108773 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.033077  108773 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.034297  108773 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0915 13:40:03.034909  108773 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0915 13:40:03.036091  108773 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.037280  108773 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0915 13:40:03.037486  108773 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0915 13:40:03.038299  108773 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.038998  108773 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.039530  108773 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.040513  108773 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.041143  108773 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.041924  108773 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.042808  108773 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0915 13:40:03.042949  108773 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0915 13:40:03.043876  108773 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.046521  108773 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.046880  108773 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.047961  108773 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.048511  108773 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.048966  108773 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.050170  108773 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.050584  108773 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.051124  108773 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.053209  108773 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.053727  108773 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.054816  108773 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0915 13:40:03.055087  108773 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0915 13:40:03.055117  108773 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0915 13:40:03.056443  108773 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.057974  108773 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.058987  108773 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.059690  108773 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.061342  108773 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0c6e10b0-7dd5-456b-a796-7118161b42f1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0915 13:40:03.069198  108773 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.784411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35106]
I0915 13:40:03.069630  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.069652  108773 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0915 13:40:03.069661  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.069671  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.069688  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.069695  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.069750  108773 httplog.go:90] GET /healthz: (239.584µs) 0 [Go-http-client/1.1 127.0.0.1:35106]
I0915 13:40:03.078450  108773 httplog.go:90] GET /api/v1/services: (1.195462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35106]
I0915 13:40:03.086088  108773 httplog.go:90] GET /api/v1/services: (1.265775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35106]
I0915 13:40:03.088794  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.088823  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.088839  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.088847  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.088855  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.088886  108773 httplog.go:90] GET /healthz: (301.621µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35106]
I0915 13:40:03.090382  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.776219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.092981  108773 httplog.go:90] GET /api/v1/services: (1.710228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.093189  108773 httplog.go:90] GET /api/v1/services: (2.549393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35106]
I0915 13:40:03.093631  108773 httplog.go:90] POST /api/v1/namespaces: (2.337049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35110]
I0915 13:40:03.095168  108773 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.109059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.099208  108773 httplog.go:90] POST /api/v1/namespaces: (3.548694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.111672  108773 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.691968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.114535  108773 httplog.go:90] POST /api/v1/namespaces: (2.405854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.179301  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.179331  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.179343  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.179352  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.179376  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.179407  108773 httplog.go:90] GET /healthz: (244.927µs) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.197436  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.197481  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.197494  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.197503  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.197512  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.197556  108773 httplog.go:90] GET /healthz: (270.494µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.280443  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.280478  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.280491  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.280501  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.280518  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.280561  108773 httplog.go:90] GET /healthz: (303.242µs) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.289974  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.290022  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.290035  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.290044  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.290053  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.290084  108773 httplog.go:90] GET /healthz: (266.397µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.377525  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.377559  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.377571  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.377581  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.377589  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.377625  108773 httplog.go:90] GET /healthz: (307.866µs) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.389676  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.389710  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.389722  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.389731  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.389739  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.389772  108773 httplog.go:90] GET /healthz: (238.804µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.477616  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.477649  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.477662  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.477672  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.477680  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.477719  108773 httplog.go:90] GET /healthz: (247.838µs) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.489767  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.489804  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.489828  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.489838  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.489845  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.489895  108773 httplog.go:90] GET /healthz: (270.913µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.577559  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.577597  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.577610  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.577625  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.577637  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.577682  108773 httplog.go:90] GET /healthz: (272.091µs) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.589786  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.589821  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.589833  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.589853  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.589861  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.589889  108773 httplog.go:90] GET /healthz: (268.286µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.677637  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.677685  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.677699  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.677709  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.677722  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.677754  108773 httplog.go:90] GET /healthz: (277.921µs) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.689680  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.689715  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.689728  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.689737  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.689745  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.689770  108773 httplog.go:90] GET /healthz: (230.44µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.777596  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.777636  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.777649  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.777667  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.777676  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.777713  108773 httplog.go:90] GET /healthz: (370.116µs) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.789714  108773 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0915 13:40:03.789756  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.789775  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.789785  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.789792  108773 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.789823  108773 httplog.go:90] GET /healthz: (252.723µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.807247  108773 client.go:361] parsed scheme: "endpoint"
I0915 13:40:03.807342  108773 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0915 13:40:03.878766  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.878800  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.878816  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.878824  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.878868  108773 httplog.go:90] GET /healthz: (1.397395ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.890781  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.890820  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.890830  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.890838  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.890902  108773 httplog.go:90] GET /healthz: (1.273257ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:03.981777  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.981810  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.981820  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.981829  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.981877  108773 httplog.go:90] GET /healthz: (4.142487ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:03.991180  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:03.991207  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:03.991217  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:03.991225  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:03.991262  108773 httplog.go:90] GET /healthz: (952.144µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.070003  108773 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.697682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.070250  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.80085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0915 13:40:04.074666  108773 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.507495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.074914  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.96984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0915 13:40:04.075170  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (7.110607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35118]
I0915 13:40:04.075753  108773 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0915 13:40:04.078460  108773 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.567262ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.078692  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.526937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0915 13:40:04.078854  108773 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.512201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35118]
I0915 13:40:04.081576  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.99146ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0915 13:40:04.081747  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.081763  108773 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0915 13:40:04.081773  108773 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0915 13:40:04.081781  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0915 13:40:04.081807  108773 httplog.go:90] GET /healthz: (1.859701ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:04.081869  108773 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.199605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35118]
I0915 13:40:04.083079  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (875.092µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0915 13:40:04.084681  108773 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (5.045023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.086124  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.702476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35112]
I0915 13:40:04.086316  108773 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0915 13:40:04.086330  108773 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0915 13:40:04.087745  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (822.811µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.090094  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.090128  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.090159  108773 httplog.go:90] GET /healthz: (718.591µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.090973  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.369715ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.092054  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (776.253µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.093281  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (846.196µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.095385  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.707803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.099427  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0915 13:40:04.100790  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.103415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.103420  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.13907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.103664  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0915 13:40:04.108565  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (4.746382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.111050  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.980025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.111254  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0915 13:40:04.114327  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (2.626174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.117011  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.255837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.117281  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0915 13:40:04.118666  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (906.924µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.120539  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.478405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.120816  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0915 13:40:04.121996  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (961.224µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.129915  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.330202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.130937  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0915 13:40:04.132118  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (942.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.137732  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.04774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.138005  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0915 13:40:04.139267  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.030389ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.143135  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.331939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.143345  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0915 13:40:04.144951  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.02567ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.147727  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.173941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.148206  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0915 13:40:04.149450  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (984.146µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.152249  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.146365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.152614  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0915 13:40:04.153996  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.242584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.156894  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.348189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.157089  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0915 13:40:04.158440  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.082874ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.161979  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.088148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.162549  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0915 13:40:04.164480  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.381313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.168543  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.188394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.169053  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0915 13:40:04.172274  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (3.016007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.175614  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.834296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.175851  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0915 13:40:04.178873  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (2.370765ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.180472  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.180502  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.180536  108773 httplog.go:90] GET /healthz: (1.796345ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:04.181090  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.689458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.181403  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0915 13:40:04.183852  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (2.109822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.185892  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.658457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.186092  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0915 13:40:04.192063  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.192092  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.192098  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (4.658995ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.192139  108773 httplog.go:90] GET /healthz: (2.61745ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.194239  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.622741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.194665  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0915 13:40:04.196235  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.379646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.204602  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.923724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.211118  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0915 13:40:04.214943  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (3.451253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.228256  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.960046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.228632  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0915 13:40:04.238105  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (2.445056ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.249857  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (11.193779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.250228  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0915 13:40:04.255716  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.942279ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.261198  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.940537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.261576  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0915 13:40:04.265216  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (2.230657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.268400  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.709054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.268636  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0915 13:40:04.270065  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.105878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.272171  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.629071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.272406  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0915 13:40:04.274933  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (2.381448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.277191  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.851938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.277443  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0915 13:40:04.278872  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.194316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.281107  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.281135  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.281160  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.922624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.281193  108773 httplog.go:90] GET /healthz: (2.030609ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:04.281524  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0915 13:40:04.282540  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (818.013µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.285352  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.445864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.285762  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0915 13:40:04.287121  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.03193ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.290163  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.342405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.290485  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0915 13:40:04.333722  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.333753  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.333801  108773 httplog.go:90] GET /healthz: (43.809685ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.333857  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (43.144603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.336959  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.370281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.337542  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0915 13:40:04.338973  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.187288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.341332  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.863807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.341708  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0915 13:40:04.343166  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.271151ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.346611  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.034854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.346854  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0915 13:40:04.350464  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (3.425293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.354519  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.609199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.354964  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0915 13:40:04.358925  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (3.771384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.361510  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.058135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.361943  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0915 13:40:04.363091  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (951.397µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.367502  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.016029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.367996  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0915 13:40:04.380176  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (11.961476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.380453  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.381659  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.381703  108773 httplog.go:90] GET /healthz: (2.067818ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:04.387950  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.648454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.388408  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0915 13:40:04.391094  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.391137  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.391177  108773 httplog.go:90] GET /healthz: (1.226434ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.391881  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (3.252514ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.394466  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.06535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.394867  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0915 13:40:04.396293  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.092273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.399089  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.375334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.399488  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0915 13:40:04.402184  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (2.198827ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.404310  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.578258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.404635  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0915 13:40:04.410493  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (5.404011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.413809  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.47481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.414248  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0915 13:40:04.416993  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.769418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.419570  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.816835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.419776  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0915 13:40:04.420865  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (908.252µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.423597  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.076538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.423794  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0915 13:40:04.428406  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (955.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.431493  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.43875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.431774  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0915 13:40:04.432878  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (897.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.434804  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.435057  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0915 13:40:04.436147  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (818.536µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.438330  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.697482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.438629  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0915 13:40:04.439723  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (895.499µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.442063  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.702235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.442255  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0915 13:40:04.443349  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (760.405µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.445451  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.716093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.445655  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0915 13:40:04.446759  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (973.6µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.448928  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.825015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.449203  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0915 13:40:04.450160  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (758.153µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.452534  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.675339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.452817  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0915 13:40:04.453831  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (844.544µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.455623  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.433752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.455797  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0915 13:40:04.456714  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (748.247µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.459153  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.040049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.459457  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0915 13:40:04.460509  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (843.32µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.462633  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.714218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.462826  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0915 13:40:04.463873  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (825.468µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.466140  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.730799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.466472  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0915 13:40:04.467609  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (852.332µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.469986  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.470905  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0915 13:40:04.472026  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (765.482µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.474582  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.966258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.474916  108773 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
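
Each GET/POST pair above follows the same reconcile pattern: the RBAC bootstrap hook first GETs a default clusterrole (404 when absent), then POSTs it (201) and logs "created clusterrole ...". A rough get-or-create sketch with client-go is given below; the clientset construction, kubeconfig path, role name and rules are assumptions for illustration, and recent client-go versions pass a context to Get/Create. This is not the storage_rbac.go implementation.

    // Hypothetical get-or-create sketch mirroring the 404-then-201 sequence above.
    package main

    import (
        "context"
        "log"

        rbacv1 "k8s.io/api/rbac/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // ensureClusterRole creates the ClusterRole only if a GET reports NotFound.
    func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
        _, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
        if err == nil {
            return nil // already present, nothing to do
        }
        if !apierrors.IsNotFound(err) {
            return err
        }
        _, err = cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
        return err
    }

    func main() {
        // kubeconfig path and role contents are assumptions for the example.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        role := &rbacv1.ClusterRole{
            ObjectMeta: metav1.ObjectMeta{Name: "example:read-pods"}, // hypothetical name
            Rules: []rbacv1.PolicyRule{{
                APIGroups: []string{""},
                Resources: []string{"pods"},
                Verbs:     []string{"get", "list", "watch"},
            }},
        }
        if err := ensureClusterRole(context.Background(), cs, role); err != nil {
            log.Fatal(err)
        }
    }

The same pattern repeats below for the default clusterrolebindings and then for the namespaced roles in kube-system and kube-public, where the hook additionally GETs the namespace before creating the role.
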
I0915 13:40:04.476067  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (945.819µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.478180  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.581036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.478384  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0915 13:40:04.479342  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (764.867µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.481700  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.966863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.481878  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0915 13:40:04.482974  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (938.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.484089  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.484111  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.484145  108773 httplog.go:90] GET /healthz: (5.391543ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:04.489473  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.848252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.489842  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0915 13:40:04.490457  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.490490  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.490528  108773 httplog.go:90] GET /healthz: (1.032688ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.509194  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.396418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.530329  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.581164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.530888  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0915 13:40:04.549614  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.913124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.570186  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.44365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.570519  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0915 13:40:04.578527  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.578556  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.578615  108773 httplog.go:90] GET /healthz: (1.220215ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:04.589221  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.464346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.590432  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.590456  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.590492  108773 httplog.go:90] GET /healthz: (872.15µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.610156  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.434033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.610450  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0915 13:40:04.629253  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.521131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.651559  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.836547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.651855  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0915 13:40:04.668988  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.313138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
E0915 13:40:04.675892  108773 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37653/apis/events.k8s.io/v1beta1/namespaces/permit-pluginf3f2b8d6-6d4a-426f-acf5-2c6a5f0f14eb/events: dial tcp 127.0.0.1:37653: connect: connection refused' (may retry after sleeping)
I0915 13:40:04.678414  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.678442  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.678497  108773 httplog.go:90] GET /healthz: (1.189814ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:04.689764  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.077408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.690063  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0915 13:40:04.690791  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.690813  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.690852  108773 httplog.go:90] GET /healthz: (1.272135ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.709325  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.658802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.730396  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.640772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.730871  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0915 13:40:04.749038  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.321358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.770428  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.561197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.770907  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0915 13:40:04.778535  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.778566  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.778613  108773 httplog.go:90] GET /healthz: (1.19388ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:04.789075  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.346989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.790531  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.790565  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.790613  108773 httplog.go:90] GET /healthz: (970.797µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.810426  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.670586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.810871  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0915 13:40:04.829072  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.363353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.850496  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.812472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.850778  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0915 13:40:04.870547  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.530087ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.878515  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.878548  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.878595  108773 httplog.go:90] GET /healthz: (1.185036ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:04.889880  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.133661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:04.890313  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0915 13:40:04.891193  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.891221  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.891255  108773 httplog.go:90] GET /healthz: (1.266414ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.909253  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.547522ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.931288  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.581764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.932640  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0915 13:40:04.949008  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.276291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.970866  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.212959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.971625  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0915 13:40:04.978754  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.978797  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.978862  108773 httplog.go:90] GET /healthz: (1.258597ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:04.989264  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.526746ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:04.990528  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:04.990567  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:04.990609  108773 httplog.go:90] GET /healthz: (1.027449ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.011659  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.211887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.012465  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0915 13:40:05.033656  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (5.998506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.051229  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.478371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.051985  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0915 13:40:05.069262  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.540167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.080557  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.080588  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.080630  108773 httplog.go:90] GET /healthz: (1.290048ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:05.089988  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.344315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.090268  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0915 13:40:05.091915  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.091937  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.091980  108773 httplog.go:90] GET /healthz: (1.151564ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.108967  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.286993ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.132263  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.888798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.132573  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0915 13:40:05.149551  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.882604ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.170251  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.559641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.170795  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0915 13:40:05.180405  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.180445  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.180491  108773 httplog.go:90] GET /healthz: (2.604436ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:05.190958  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (3.260601ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.191842  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.191874  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.191911  108773 httplog.go:90] GET /healthz: (1.166276ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.210212  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.313855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.210591  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0915 13:40:05.229597  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.873898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.250441  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.698951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.250950  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0915 13:40:05.269108  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.352195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.278422  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.278455  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.278514  108773 httplog.go:90] GET /healthz: (1.133297ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:05.289595  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.898576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.289814  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0915 13:40:05.290828  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.290855  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.290895  108773 httplog.go:90] GET /healthz: (1.130743ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.309050  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.336443ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.331230  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.997339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.331563  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0915 13:40:05.351030  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.431852ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.370001  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.139028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.370302  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0915 13:40:05.379776  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.379812  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.379857  108773 httplog.go:90] GET /healthz: (1.028009ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:05.389398  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.598857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.390773  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.390804  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.390846  108773 httplog.go:90] GET /healthz: (1.312755ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.410205  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.201064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.410516  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0915 13:40:05.429448  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.618854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.455288  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.509242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.455885  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0915 13:40:05.478330  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.478415  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.478456  108773 httplog.go:90] GET /healthz: (837.973µs) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:05.486843  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.028353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.490609  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.712951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.491160  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0915 13:40:05.491561  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.491582  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.491621  108773 httplog.go:90] GET /healthz: (1.792213ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.509334  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.186332ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.531725  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.597114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.532129  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0915 13:40:05.549025  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.338382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.570331  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.627189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.570835  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0915 13:40:05.578565  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.578598  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.578644  108773 httplog.go:90] GET /healthz: (1.274299ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:05.589070  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.360579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.590707  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.590735  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.590770  108773 httplog.go:90] GET /healthz: (1.309014ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.610251  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.088883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.611127  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0915 13:40:05.631071  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (3.287898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.651453  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.42832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.651965  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0915 13:40:05.671968  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.978002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.679992  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.680019  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.680061  108773 httplog.go:90] GET /healthz: (2.24762ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:05.689919  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.218118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.690176  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0915 13:40:05.690921  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.690952  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.691007  108773 httplog.go:90] GET /healthz: (924.677µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.709356  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.648178ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.733482  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.546247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.733735  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0915 13:40:05.749051  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.275039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.771434  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.684503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.771713  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0915 13:40:05.778744  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.778771  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.778815  108773 httplog.go:90] GET /healthz: (1.446374ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:05.789630  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.938486ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.791170  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.791193  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.791257  108773 httplog.go:90] GET /healthz: (1.776104ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.811269  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.72446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.811575  108773 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0915 13:40:05.829007  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.32968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.832816  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.120263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.850350  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.623318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.850642  108773 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0915 13:40:05.868959  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.276828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.870587  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.128734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.879012  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.879051  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.879095  108773 httplog.go:90] GET /healthz: (1.786939ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:05.890890  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.212685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.890987  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.891020  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.891076  108773 httplog.go:90] GET /healthz: (905.901µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.891127  108773 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0915 13:40:05.908805  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.132644ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.910719  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.480645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.931121  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.988463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.932172  108773 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0915 13:40:05.953067  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.446296ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.954924  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.252373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.969993  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.18566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.970384  108773 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0915 13:40:05.979956  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.979989  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.980063  108773 httplog.go:90] GET /healthz: (1.057414ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:05.989195  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.486657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:05.991307  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:05.991338  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:05.991389  108773 httplog.go:90] GET /healthz: (1.597011ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:05.991796  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.116559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.010251  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.531058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.010587  108773 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0915 13:40:06.029115  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.376606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.031779  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.296368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.051229  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.465276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.051526  108773 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0915 13:40:06.071041  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (3.356453ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.072976  108773 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.204971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.078506  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:06.078532  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:06.078567  108773 httplog.go:90] GET /healthz: (1.26002ms) 0 [Go-http-client/1.1 127.0.0.1:35108]
I0915 13:40:06.091031  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.271133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.091498  108773 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0915 13:40:06.092474  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:06.092508  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:06.092554  108773 httplog.go:90] GET /healthz: (2.941298ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.109649  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.819554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.111945  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.448149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.131668  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.047396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.131936  108773 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0915 13:40:06.149312  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.569804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.151021  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.220264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.171377  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.55351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.171644  108773 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0915 13:40:06.178347  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:06.178388  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:06.178431  108773 httplog.go:90] GET /healthz: (1.08787ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:06.189130  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.377092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.191908  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:06.191935  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:06.191974  108773 httplog.go:90] GET /healthz: (1.833523ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.192084  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.502287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.210509  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.778562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.212036  108773 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0915 13:40:06.229006  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.304204ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.232460  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.366103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.252236  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.461766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.253111  108773 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0915 13:40:06.271732  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (4.040692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.273667  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.456806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.278821  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:06.278846  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:06.278884  108773 httplog.go:90] GET /healthz: (1.384377ms) 0 [Go-http-client/1.1 127.0.0.1:35120]
I0915 13:40:06.291475  108773 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0915 13:40:06.291500  108773 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0915 13:40:06.291540  108773 httplog.go:90] GET /healthz: (1.640957ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.297220  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (8.32076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.297680  108773 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0915 13:40:06.309182  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.403281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.311582  108773 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.869399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.330173  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.493714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.330472  108773 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0915 13:40:06.349949  108773 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.184153ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.351632  108773 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.211184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.370645  108773 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.949038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.370920  108773 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0915 13:40:06.379410  108773 httplog.go:90] GET /healthz: (1.099044ms) 200 [Go-http-client/1.1 127.0.0.1:35120]
W0915 13:40:06.380148  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380192  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380203  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380234  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380630  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380650  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380659  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380673  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380685  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380699  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0915 13:40:06.380748  108773 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0915 13:40:06.380771  108773 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0915 13:40:06.380782  108773 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0915 13:40:06.380978  108773 shared_informer.go:197] Waiting for caches to sync for scheduler
I0915 13:40:06.381218  108773 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0915 13:40:06.381234  108773 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:231
I0915 13:40:06.382150  108773 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (612.357µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:06.383400  108773 get.go:251] Starting watch for /api/v1/pods, rv=30584 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m17s
I0915 13:40:06.391018  108773 httplog.go:90] GET /healthz: (1.445602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.392958  108773 httplog.go:90] GET /api/v1/namespaces/default: (1.510631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.395540  108773 httplog.go:90] POST /api/v1/namespaces: (2.094749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.397682  108773 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.786246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.401686  108773 httplog.go:90] POST /api/v1/namespaces/default/services: (3.612571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.404802  108773 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.738345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.407200  108773 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.001725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.481159  108773 shared_informer.go:227] caches populated
I0915 13:40:06.481190  108773 shared_informer.go:204] Caches are synced for scheduler 
I0915 13:40:06.481672  108773 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.481683  108773 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.481708  108773 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.481838  108773 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.481848  108773 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.482196  108773 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.482215  108773 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.482304  108773 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.482320  108773 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.482626  108773 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.482664  108773 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.482773  108773 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.482789  108773 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.483328  108773 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.483346  108773 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.483769  108773 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.483785  108773 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.481700  108773 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.484635  108773 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (725.378µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35280]
I0915 13:40:06.485249  108773 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (491.889µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35282]
I0915 13:40:06.485298  108773 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (515.898µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.485733  108773 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (370.553µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0915 13:40:06.486278  108773 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (446.394µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35286]
I0915 13:40:06.487658  108773 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30584 labels= fields= timeout=6m54s
I0915 13:40:06.487826  108773 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (786.72µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:06.488427  108773 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30585 labels= fields= timeout=6m11s
I0915 13:40:06.489201  108773 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.489222  108773 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0915 13:40:06.489757  108773 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (438.233µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35288]
I0915 13:40:06.490067  108773 get.go:251] Starting watch for /api/v1/services, rv=30849 labels= fields= timeout=6m12s
I0915 13:40:06.491996  108773 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30584 labels= fields= timeout=6m13s
I0915 13:40:06.492863  108773 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30584 labels= fields= timeout=7m18s
I0915 13:40:06.492872  108773 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (628.67µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35282]
I0915 13:40:06.494155  108773 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30585 labels= fields= timeout=7m7s
I0915 13:40:06.494201  108773 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (6.209636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35280]
I0915 13:40:06.494866  108773 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30585 labels= fields= timeout=8m1s
I0915 13:40:06.495023  108773 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (362.838µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35300]
I0915 13:40:06.495782  108773 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30585 labels= fields= timeout=7m31s
I0915 13:40:06.496708  108773 get.go:251] Starting watch for /api/v1/nodes, rv=30584 labels= fields= timeout=7m18s
I0915 13:40:06.498815  108773 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30585 labels= fields= timeout=7m7s
I0915 13:40:06.581504  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581537  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581544  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581549  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581555  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581561  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581566  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581584  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581590  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581600  108773 shared_informer.go:227] caches populated
I0915 13:40:06.581609  108773 shared_informer.go:227] caches populated
I0915 13:40:06.584789  108773 httplog.go:90] POST /api/v1/nodes: (2.448807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:06.585466  108773 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0915 13:40:06.587785  108773 httplog.go:90] PUT /api/v1/nodes/testnode/status: (2.227388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:06.591731  108773 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods: (3.377885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:06.592090  108773 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pidpressure-fake-name
I0915 13:40:06.592106  108773 scheduler.go:530] Attempting to schedule pod: node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pidpressure-fake-name
I0915 13:40:06.592236  108773 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pidpressure-fake-name", node "testnode"
I0915 13:40:06.592260  108773 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0915 13:40:06.592303  108773 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0915 13:40:06.595151  108773 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name/binding: (2.370851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:06.595585  108773 scheduler.go:662] pod node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0915 13:40:06.597808  108773 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/events: (1.87791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:06.694000  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.592454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:06.794067  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.634862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:06.895091  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.660183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:06.994355  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.853445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.094118  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.655295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.195215  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.470717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.294857  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.316869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.394538  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.99653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.486062  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:07.486446  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:07.487341  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:07.490811  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:07.494724  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:07.495714  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:07.502034  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (9.546459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.593899  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.49728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.710790  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (18.346985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.794077  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.503688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:07.895325  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.054221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
E0915 13:40:07.992311  108773 factory.go:590] Error getting pod permit-pluginf3f2b8d6-6d4a-426f-acf5-2c6a5f0f14eb/test-pod for retry: Get http://127.0.0.1:37653/api/v1/namespaces/permit-pluginf3f2b8d6-6d4a-426f-acf5-2c6a5f0f14eb/pods/test-pod: dial tcp 127.0.0.1:37653: connect: connection refused; retrying...
I0915 13:40:07.994115  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.7969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.094191  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.747889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.195225  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.070943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.293921  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.502622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.394355  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.974682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.486313  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:08.486642  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:08.487589  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:08.491008  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:08.494308  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.773891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.494918  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:08.495883  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:08.594117  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.734861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.694183  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.75262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.794244  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.815589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.894391  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.931701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:08.994200  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.698299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.094251  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.78803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.194556  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.027968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.294646  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.060111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.394238  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.800787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.486499  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:09.486915  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:09.487753  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:09.491150  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:09.494103  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.80126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.495075  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:09.496053  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:09.594663  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.210196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.694636  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.193363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.794379  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.944139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.894182  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.771495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:09.994210  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.80437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.094263  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.806975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.194269  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.805953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.294255  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.793174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.393975  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.649287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.486688  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:10.487074  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:10.487908  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:10.491319  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:10.494552  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.143196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.495230  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:10.496220  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:10.594516  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.014813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.694687  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.14209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.794188  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.778489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.894187  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.735216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:10.994611  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.192845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.094289  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.829222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.194258  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.802412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.295173  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.171834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.394656  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.159954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.486843  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:11.487204  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:11.488072  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:11.491610  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:11.494148  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.801331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.495419  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:11.496403  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:11.627057  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.961135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.694291  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.798117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.794661  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.208042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.894168  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.715415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:11.994107  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.694949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.094222  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.763025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.194695  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.61687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.294961  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.447669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.394628  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.953866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.487038  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:12.487381  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:12.488244  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:12.491814  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:12.495542  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:12.496557  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:12.498728  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (6.401103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.594397  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.869421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.694759  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.335427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.794635  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.161749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.894278  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.773107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:12.994643  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.080443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.094517  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.080512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.194650  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.920968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.293867  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.443562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.394033  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.576751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.487256  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:13.487552  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:13.488441  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:13.491931  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:13.494159  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.7032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.495727  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:13.496689  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:13.594323  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.90188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.693881  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.504631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.794241  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.702248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.894329  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.837777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:13.994275  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.842785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.094078  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.620574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.199321  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (6.881913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.294311  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.561689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.394436  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.896994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.487456  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:14.487697  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:14.488620  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:14.492122  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:14.495173  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.566673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.495878  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:14.496829  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:14.594286  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.785773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.694444  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.971718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.794564  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.042784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.894338  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.82933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:14.994297  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.811687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.094214  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.745723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.205236  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.916334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.294385  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.993059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.394401  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.902125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.487791  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:15.487896  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:15.488788  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:15.492313  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:15.495591  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.459727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.496051  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:15.496981  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:15.594234  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.832253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.694173  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.759696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.794165  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.720336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.894538  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.043462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:15.996959  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.10939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.094177  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.723689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.194566  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.034959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.294273  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.848667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.393445  108773 httplog.go:90] GET /api/v1/namespaces/default: (1.716648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.394182  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.530697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35790]
I0915 13:40:16.395083  108773 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.06029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.396826  108773 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.388704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.487989  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:16.488046  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:16.488994  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:16.493165  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:16.493950  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.491292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.496154  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:16.497100  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:16.594172  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.760129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.694524  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.014005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.794181  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.763516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
E0915 13:40:16.796286  108773 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37653/apis/events.k8s.io/v1beta1/namespaces/permit-pluginf3f2b8d6-6d4a-426f-acf5-2c6a5f0f14eb/events: dial tcp 127.0.0.1:37653: connect: connection refused' (may retry after sleeping)
I0915 13:40:16.894010  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.600372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:16.994038  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.599014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.094321  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.853369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.194232  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.830053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.294098  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.643477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.394273  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.750612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.488203  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:17.488250  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:17.489096  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:17.493405  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:17.494190  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.714746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.496300  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:17.497338  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:17.594693  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.201722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.694057  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.540803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.794478  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.992372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.894387  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.82786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:17.994346  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.932269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.113767  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (20.88383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.193973  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.618268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.294156  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.700804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.394183  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.816612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.488345  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:18.488426  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:18.489242  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:18.493576  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:18.494105  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.703817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.496441  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:18.497516  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:18.594842  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.382489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.694397  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.908759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.794288  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.770863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.894763  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.281463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:18.994619  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.098157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.094168  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.698093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.194271  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.842661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.294097  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.6085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.394406  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.885037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.488534  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:19.488588  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:19.489924  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:19.493711  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:19.494398  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.908344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.496618  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:19.497675  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:19.594684  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.195949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.694254  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.776708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.794452  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.996581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.894289  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.78584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:19.994506  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.058006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.093999  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.564688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.193910  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.495472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.294122  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.667727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.394389  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.860721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.488704  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:20.488772  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:20.490543  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:20.493855  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:20.494179  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.73408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.496718  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:20.497847  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:20.594343  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.888689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.694214  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.71524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.794437  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.876827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.894120  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.750598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:20.994320  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.885618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.094179  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.791232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.194179  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.735038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.294345  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.839617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.394334  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.843523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.488835  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:21.489086  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:21.490652  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:21.494113  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.695791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.494632  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:21.496890  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:21.498026  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:21.594132  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.672022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.694189  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.683313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.794022  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.563614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.894033  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.604673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:21.994792  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.304825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.094069  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.63059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.193827  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.373398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.294254  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.613729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.394256  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.908152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.489057  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:22.489260  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:22.490867  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:22.494210  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.78388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.497063  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:22.497535  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:22.498739  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:22.594853  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.248975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.694892  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.398997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.794142  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.641408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.894072  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.589908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:22.994455  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.964175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.094285  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.737902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.194296  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.824293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.294999  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.459146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.394584  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.112196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.489257  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:23.489422  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:23.491069  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:23.493850  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.485669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.497252  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:23.497648  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:23.498903  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:23.594416  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.808538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.694268  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.740178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.794352  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.84453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.894035  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.606625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:23.994335  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.830397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.098290  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (5.310416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.195637  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.802549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.295341  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.903182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.394272  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.745447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.489417  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:24.489554  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:24.491156  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:24.494749  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.395205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.497447  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:24.497888  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:24.499076  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:24.594228  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.767312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.693839  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.387218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.794294  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.629625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.894137  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.612576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:24.994004  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.301541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.094271  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.796099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.193881  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.349349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.294202  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.589321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.393925  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.360011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.489556  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:25.489803  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:25.491292  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:25.493870  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.446886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.497654  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:25.498054  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:25.499236  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:25.594448  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.066048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.694291  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.771282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.794071  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.572788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.894972  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.39422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:25.994217  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.686404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.094418  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.864117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.195032  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.308581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.293795  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.432845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.393714  108773 httplog.go:90] GET /api/v1/namespaces/default: (1.840525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.395103  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.233266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35790]
I0915 13:40:26.395330  108773 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.149796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.396917  108773 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.089811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.489766  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:26.489977  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:26.491447  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:26.494045  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.630967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.497831  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:26.498207  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:26.499418  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:26.594009  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.544443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.694280  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.808313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.794549  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.95613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.894762  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.230191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:26.993972  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.590031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.094272  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.836217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.194826  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.782426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.294097  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.666882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.394069  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.642502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.489967  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:27.490193  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:27.491655  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:27.495446  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.961711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.498382  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:27.498763  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:27.499681  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:27.594498  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.035561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.695714  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.756195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.794175  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.702792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.894663  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.155459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:27.994787  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.284171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
E0915 13:40:27.994855  108773 event_broadcaster.go:244] Unable to write event: 'Post http://127.0.0.1:37653/apis/events.k8s.io/v1beta1/namespaces/permit-pluginf3f2b8d6-6d4a-426f-acf5-2c6a5f0f14eb/events: dial tcp 127.0.0.1:37653: connect: connection refused' (may retry after sleeping)
I0915 13:40:28.094243  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.748665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.194043  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.561928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.294089  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.629957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.394181  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.713812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.490168  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:28.490351  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:28.491873  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:28.494297  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.799328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.498627  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:28.498929  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:28.499867  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:28.594182  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.720378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.694212  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.705892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.794238  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.731818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.894797  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.158938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:28.994424  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.903552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.094602  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.070811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.194405  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.870176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.294285  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.749409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.394457  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.979971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.490408  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:29.490485  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:29.492068  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:29.494070  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.641071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.499122  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:29.499442  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:29.500027  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:29.594433  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.798407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.694174  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.71176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.794278  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.775551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.893931  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.49708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:29.998044  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (5.565254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.094252  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.789288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.194250  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.767377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.294585  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.999214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.394261  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.751976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.490940  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:30.490944  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:30.492186  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:30.494427  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.775435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.499435  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:30.499877  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:30.500197  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:30.594347  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.709303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.694541  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.76256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.794236  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.736585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.894687  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.215639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:30.994657  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.125673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.094459  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.976245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.194220  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.777438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.294394  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.824307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.394178  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.608193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.491238  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:31.491401  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:31.492416  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:31.494685  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.925045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.499809  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:31.500108  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:31.500491  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:31.594311  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.832139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.694512  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.067662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.794612  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.081454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.894136  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.638839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:31.994653  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.12146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.094628  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.185596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.195204  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.632087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.294275  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.770217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.394476  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.974122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.491664  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:32.491724  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:32.492899  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:32.494079  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.728731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.499996  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:32.500253  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:32.501289  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:32.594087  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.646667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.694076  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.66131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.794303  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.872417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.894571  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.030051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:32.994582  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.74251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.095095  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.725312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.194162  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.72912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.294253  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.738028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.394271  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.728393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.491877  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:33.491932  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:33.493622  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:33.494104  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.675286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.500207  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:33.500597  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:33.501429  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
E0915 13:40:33.593140  108773 factory.go:590] Error getting pod permit-pluginf3f2b8d6-6d4a-426f-acf5-2c6a5f0f14eb/test-pod for retry: Get http://127.0.0.1:37653/api/v1/namespaces/permit-pluginf3f2b8d6-6d4a-426f-acf5-2c6a5f0f14eb/pods/test-pod: dial tcp 127.0.0.1:37653: connect: connection refused; retrying...
I0915 13:40:33.594164  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.717551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.694115  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.705365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.794260  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.790691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.894259  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.787827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:33.993928  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.47279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.094259  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.574033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.193666  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.22904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.294280  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.839196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.394473  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.034255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.492063  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:34.492096  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:34.493658  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.327729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.494003  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:34.500453  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:34.500793  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:34.501622  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:34.594079  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.665617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.694497  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.622961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.794331  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.859064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.894443  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.075085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:34.994175  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.633378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.094325  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.965221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.194192  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.79136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.294084  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.665343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.394224  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (1.730672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.492294  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:35.492420  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:35.494354  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:35.495800  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (3.063941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.500656  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:35.500987  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:35.501835  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:35.594930  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.356906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.695847  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (3.158246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.796191  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (3.552403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.895432  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.283061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:35.995513  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.939402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.097277  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (4.606994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.196261  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (3.6843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.299257  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (6.498792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.395797  108773 httplog.go:90] GET /api/v1/namespaces/default: (3.145858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.398617  108773 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.161377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.402075  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (9.444668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35790]
I0915 13:40:36.402729  108773 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.150773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.492636  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:36.492825  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:36.494583  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:36.496933  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (4.235309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.500943  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:36.501318  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:36.502279  108773 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0915 13:40:36.596715  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (3.124086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.600351  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.56876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.613732  108773 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (12.176267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.618520  108773 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pods/pidpressure-fake-name: (2.032648ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.619309  108773 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30585&timeout=8m1s&timeoutSeconds=481&watch=true: (30.124672869s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35280]
I0915 13:40:36.619754  108773 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30585&timeout=7m7s&timeoutSeconds=427&watch=true: (30.131012231s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35284]
I0915 13:40:36.619794  108773 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30585&timeout=7m31s&timeoutSeconds=451&watch=true: (30.124178514s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35300]
I0915 13:40:36.619913  108773 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30584&timeout=7m18s&timeoutSeconds=438&watch=true: (30.129203117s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35288]
I0915 13:40:36.619943  108773 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30584&timeout=7m18s&timeoutSeconds=438&watch=true: (30.133149662s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35292]
E0915 13:40:36.620097  108773 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0915 13:40:36.620137  108773 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30585&timeout=7m7s&timeoutSeconds=427&watch=true: (30.126269904s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35296]
I0915 13:40:36.620235  108773 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30584&timeout=6m54s&timeoutSeconds=414&watch=true: (30.132972901s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35294]
I0915 13:40:36.620153  108773 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30584&timeoutSeconds=317&watch=true: (30.237098726s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35120]
I0915 13:40:36.620509  108773 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30585&timeout=6m11s&timeoutSeconds=371&watch=true: (30.132345037s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35108]
I0915 13:40:36.620547  108773 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30849&timeout=6m12s&timeoutSeconds=372&watch=true: (30.130680781s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35286]
I0915 13:40:36.620555  108773 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30584&timeout=6m13s&timeoutSeconds=373&watch=true: (30.128845116s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35290]
I0915 13:40:36.629041  108773 httplog.go:90] DELETE /api/v1/nodes: (8.846925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.629689  108773 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0915 13:40:36.634155  108773 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (4.084198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
I0915 13:40:36.638882  108773 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (3.740778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35302]
--- FAIL: TestNodePIDPressure (33.83s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190915-133227.xml

Find node-pid-pressure07a80069-1adf-4a30-a91e-8ba373a718e2/pidpressure-fake-name mentions in log files | View test history on testgrid
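For context on the failure: the long run of GET .../pods/pidpressure-fake-name requests above, spaced roughly 100 ms apart, is the integration test polling the pod object until the scheduler binds it to a node; the poll never succeeds and the test times out ("timed out waiting for the condition, while waiting for scheduled"). A minimal sketch of that polling pattern is below, under the assumption that it follows the usual wait.Poll idiom; the helper name, clientset variable, and exact timeout are illustrative, not taken from predicates_test.go.

package example

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the API server until the pod has been bound to a
// node or the timeout elapses. Sketch only: the 30 s timeout is assumed, and
// the Get signature matches pre-1.18 client-go (no context argument), which
// is the era of this log.
func waitForPodScheduled(cs clientset.Interface, ns, podName string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Each iteration corresponds to one of the GET requests in the log;
		// the pod counts as scheduled once it has a node assigned.
		return pod.Spec.NodeName != "", nil
	})
}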


Error lines from build-log.txt

... skipping 887 lines ...
W0915 13:27:27.745] I0915 13:27:27.628329   52825 shared_informer.go:197] Waiting for caches to sync for GC
W0915 13:27:27.745] I0915 13:27:27.628731   52825 controllermanager.go:534] Started "cronjob"
W0915 13:27:27.746] I0915 13:27:27.628852   52825 cronjob_controller.go:96] Starting CronJob Manager
W0915 13:27:27.746] I0915 13:27:27.629120   52825 controllermanager.go:534] Started "csrcleaner"
W0915 13:27:27.746] W0915 13:27:27.629134   52825 controllermanager.go:513] "bootstrapsigner" is disabled
W0915 13:27:27.746] I0915 13:27:27.629194   52825 cleaner.go:81] Starting CSR cleaner controller
W0915 13:27:27.747] E0915 13:27:27.629607   52825 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0915 13:27:27.747] W0915 13:27:27.629629   52825 controllermanager.go:526] Skipping "service"
W0915 13:27:27.747] I0915 13:27:27.629920   52825 controllermanager.go:534] Started "clusterrole-aggregation"
W0915 13:27:27.748] I0915 13:27:27.630020   52825 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0915 13:27:27.748] I0915 13:27:27.630054   52825 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
W0915 13:27:27.748] I0915 13:27:27.630344   52825 controllermanager.go:534] Started "pvc-protection"
W0915 13:27:27.748] I0915 13:27:27.630480   52825 pvc_protection_controller.go:100] Starting PVC protection controller
... skipping 2 lines ...
W0915 13:27:27.749] W0915 13:27:27.630850   52825 controllermanager.go:513] "tokencleaner" is disabled
W0915 13:27:27.749] I0915 13:27:27.630865   52825 core.go:211] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0915 13:27:27.750] W0915 13:27:27.630872   52825 controllermanager.go:526] Skipping "route"
W0915 13:27:27.750] I0915 13:27:27.630990   52825 replica_set.go:182] Starting replicationcontroller controller
W0915 13:27:27.750] I0915 13:27:27.631005   52825 shared_informer.go:197] Waiting for caches to sync for ReplicationController
W0915 13:27:27.750] I0915 13:27:27.631214   52825 node_lifecycle_controller.go:77] Sending events to api server
W0915 13:27:27.751] E0915 13:27:27.631254   52825 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0915 13:27:27.751] W0915 13:27:27.631267   52825 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0915 13:27:27.751] W0915 13:27:27.631277   52825 controllermanager.go:526] Skipping "root-ca-cert-publisher"
W0915 13:27:27.961] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0915 13:27:28.040] I0915 13:27:28.040208   52825 controllermanager.go:534] Started "garbagecollector"
W0915 13:27:28.041] W0915 13:27:28.040462   52825 controllermanager.go:513] "endpointslice" is disabled
W0915 13:27:28.043] I0915 13:27:28.043165   52825 garbagecollector.go:130] Starting garbage collector controller
W0915 13:27:28.045] I0915 13:27:28.043319   52825 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0915 13:27:28.048] I0915 13:27:28.047543   52825 graph_builder.go:282] GraphBuilder running
W0915 13:27:28.072] W0915 13:27:28.071682   52825 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0915 13:27:28.103] I0915 13:27:28.103133   52825 shared_informer.go:204] Caches are synced for TTL 
W0915 13:27:28.110] I0915 13:27:28.110199   52825 shared_informer.go:204] Caches are synced for namespace 
W0915 13:27:28.111] I0915 13:27:28.110738   52825 shared_informer.go:204] Caches are synced for daemon sets 
W0915 13:27:28.118] I0915 13:27:28.117981   52825 shared_informer.go:204] Caches are synced for endpoint 
W0915 13:27:28.119] I0915 13:27:28.119144   52825 shared_informer.go:204] Caches are synced for HPA 
W0915 13:27:28.129] I0915 13:27:28.128442   52825 shared_informer.go:204] Caches are synced for GC 
W0915 13:27:28.228] I0915 13:27:28.228183   52825 shared_informer.go:204] Caches are synced for taint 
W0915 13:27:28.229] I0915 13:27:28.228352   52825 node_lifecycle_controller.go:1253] Initializing eviction metric for zone: 
W0915 13:27:28.229] I0915 13:27:28.228406   52825 taint_manager.go:186] Starting NoExecuteTaintManager
W0915 13:27:28.230] I0915 13:27:28.228470   52825 node_lifecycle_controller.go:1103] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0915 13:27:28.230] I0915 13:27:28.228640   52825 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"4c935d76-6f31-4718-8dba-8d97d57ecd82", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0915 13:27:28.231] I0915 13:27:28.230227   52825 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0915 13:27:28.244] E0915 13:27:28.243728   52825 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0915 13:27:28.303] I0915 13:27:28.302793   52825 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I0915 13:27:28.404] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I0915 13:27:28.404] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   41s
I0915 13:27:28.405] Recording: run_kubectl_version_tests
I0915 13:27:28.405] Running command: run_kubectl_version_tests
I0915 13:27:28.405] 
... skipping 91 lines ...
I0915 13:27:31.637] +++ working dir: /go/src/k8s.io/kubernetes
I0915 13:27:31.639] +++ command: run_RESTMapper_evaluation_tests
I0915 13:27:31.650] +++ [0915 13:27:31] Creating namespace namespace-1568554051-30676
I0915 13:27:31.722] namespace/namespace-1568554051-30676 created
I0915 13:27:31.790] Context "test" modified.
I0915 13:27:31.797] +++ [0915 13:27:31] Testing RESTMapper
I0915 13:27:31.895] +++ [0915 13:27:31] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0915 13:27:31.912] +++ exit code: 0
I0915 13:27:32.032] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0915 13:27:32.034] bindings                                                                      true         Binding
I0915 13:27:32.037] componentstatuses                 cs                                          false        ComponentStatus
I0915 13:27:32.037] configmaps                        cm                                          true         ConfigMap
I0915 13:27:32.037] endpoints                         ep                                          true         Endpoints
... skipping 616 lines ...
I0915 13:27:50.575] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0915 13:27:50.664] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0915 13:27:50.737] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0915 13:27:50.823] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0915 13:27:50.972] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:27:51.156] (Bpod/env-test-pod created
W0915 13:27:51.257] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0915 13:27:51.257] error: setting 'all' parameter but found a non empty selector. 
W0915 13:27:51.258] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0915 13:27:51.258] I0915 13:27:50.258739   49294 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0915 13:27:51.258] error: min-available and max-unavailable cannot be both specified
I0915 13:27:51.358] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0915 13:27:51.359] Name:         env-test-pod
I0915 13:27:51.359] Namespace:    test-kubectl-describe-pod
I0915 13:27:51.359] Priority:     0
I0915 13:27:51.359] Node:         <none>
I0915 13:27:51.359] Labels:       <none>
... skipping 174 lines ...
I0915 13:28:04.634] (Bpod/valid-pod patched
I0915 13:28:04.725] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0915 13:28:04.796] (Bpod/valid-pod patched
I0915 13:28:04.889] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0915 13:28:05.042] (Bpod/valid-pod patched
I0915 13:28:05.137] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0915 13:28:05.309] (B+++ [0915 13:28:05] "kubectl patch with resourceVersion 497" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0915 13:28:05.545] pod "valid-pod" deleted
I0915 13:28:05.553] pod/valid-pod replaced
I0915 13:28:05.650] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0915 13:28:05.815] (BSuccessful
I0915 13:28:05.815] message:error: --grace-period must have --force specified
I0915 13:28:05.816] has:\-\-grace-period must have \-\-force specified
I0915 13:28:05.969] Successful
I0915 13:28:05.970] message:error: --timeout must have --force specified
I0915 13:28:05.970] has:\-\-timeout must have \-\-force specified
I0915 13:28:06.120] node/node-v1-test created
W0915 13:28:06.221] W0915 13:28:06.120413   52825 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0915 13:28:06.322] node/node-v1-test replaced
I0915 13:28:06.371] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0915 13:28:06.446] (Bnode "node-v1-test" deleted
I0915 13:28:06.536] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0915 13:28:06.806] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0915 13:28:07.744] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 66 lines ...
I0915 13:28:11.681] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:11.836] (Bpod/test-pod created
W0915 13:28:11.937] Edit cancelled, no changes made.
W0915 13:28:11.937] Edit cancelled, no changes made.
W0915 13:28:11.938] Edit cancelled, no changes made.
W0915 13:28:11.938] Edit cancelled, no changes made.
W0915 13:28:11.938] error: 'name' already has a value (valid-pod), and --overwrite is false
W0915 13:28:11.938] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0915 13:28:11.938] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0915 13:28:12.039] pod "test-pod" deleted
I0915 13:28:12.042] +++ [0915 13:28:12] Creating namespace namespace-1568554092-28668
I0915 13:28:12.122] namespace/namespace-1568554092-28668 created
I0915 13:28:12.196] Context "test" modified.
... skipping 41 lines ...
I0915 13:28:15.357] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0915 13:28:15.360] +++ working dir: /go/src/k8s.io/kubernetes
I0915 13:28:15.362] +++ command: run_kubectl_create_error_tests
I0915 13:28:15.372] +++ [0915 13:28:15] Creating namespace namespace-1568554095-20377
I0915 13:28:15.442] namespace/namespace-1568554095-20377 created
I0915 13:28:15.510] Context "test" modified.
I0915 13:28:15.516] +++ [0915 13:28:15] Testing kubectl create with error
W0915 13:28:15.617] Error: must specify one of -f and -k
W0915 13:28:15.618] 
W0915 13:28:15.619] Create a resource from a file or from stdin.
W0915 13:28:15.619] 
W0915 13:28:15.619]  JSON and YAML formats are accepted.
W0915 13:28:15.619] 
W0915 13:28:15.620] Examples:
... skipping 41 lines ...
W0915 13:28:15.627] 
W0915 13:28:15.627] Usage:
W0915 13:28:15.627]   kubectl create -f FILENAME [options]
W0915 13:28:15.627] 
W0915 13:28:15.627] Use "kubectl <command> --help" for more information about a given command.
W0915 13:28:15.627] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0915 13:28:15.735] +++ [0915 13:28:15] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0915 13:28:15.835] kubectl convert is DEPRECATED and will be removed in a future version.
W0915 13:28:15.836] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0915 13:28:15.936] +++ exit code: 0
I0915 13:28:15.944] Recording: run_kubectl_apply_tests
I0915 13:28:15.944] Running command: run_kubectl_apply_tests
I0915 13:28:15.966] 
... skipping 16 lines ...
I0915 13:28:17.487] apply.sh:276: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0915 13:28:17.563] (Bpod "test-pod" deleted
I0915 13:28:17.774] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0915 13:28:18.043] I0915 13:28:18.043313   49294 client.go:361] parsed scheme: "endpoint"
W0915 13:28:18.044] I0915 13:28:18.043382   49294 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0915 13:28:18.047] I0915 13:28:18.047098   49294 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0915 13:28:18.136] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0915 13:28:18.236] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0915 13:28:18.237] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0915 13:28:18.241] +++ exit code: 0
I0915 13:28:18.272] Recording: run_kubectl_run_tests
I0915 13:28:18.272] Running command: run_kubectl_run_tests
I0915 13:28:18.293] 
... skipping 84 lines ...
I0915 13:28:20.582] Context "test" modified.
I0915 13:28:20.589] +++ [0915 13:28:20] Testing kubectl create filter
I0915 13:28:20.673] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:20.824] (Bpod/selector-test-pod created
I0915 13:28:20.918] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0915 13:28:21.001] (BSuccessful
I0915 13:28:21.002] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0915 13:28:21.002] has:pods "selector-test-pod-dont-apply" not found
I0915 13:28:21.074] pod "selector-test-pod" deleted
I0915 13:28:21.093] +++ exit code: 0
I0915 13:28:21.124] Recording: run_kubectl_apply_deployments_tests
I0915 13:28:21.124] Running command: run_kubectl_apply_deployments_tests
I0915 13:28:21.145] 
... skipping 38 lines ...
I0915 13:28:22.872] (Bapps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:22.971] (Bapps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:23.068] (Bapps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:23.235] (Bdeployment.apps/nginx created
I0915 13:28:23.336] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0915 13:28:27.560] (BSuccessful
I0915 13:28:27.561] message:Error from server (Conflict): error when applying patch:
I0915 13:28:27.561] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568554101-10606\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0915 13:28:27.561] to:
I0915 13:28:27.561] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0915 13:28:27.562] Name: "nginx", Namespace: "namespace-1568554101-10606"
I0915 13:28:27.563] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568554101-10606\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-15T13:28:23Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568554101-10606" "resourceVersion":"589" "selfLink":"/apis/apps/v1/namespaces/namespace-1568554101-10606/deployments/nginx" "uid":"2b848d2b-e792-4879-9100-2fe74a339d04"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-15T13:28:23Z" "lastUpdateTime":"2019-09-15T13:28:23Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-15T13:28:23Z" "lastUpdateTime":"2019-09-15T13:28:23Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0915 13:28:27.564] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0915 13:28:27.564] has:Error from server (Conflict)
W0915 13:28:27.664] I0915 13:28:23.236784   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554101-10606", Name:"nginx", UID:"2b848d2b-e792-4879-9100-2fe74a339d04", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0915 13:28:27.665] I0915 13:28:23.240481   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554101-10606", Name:"nginx-8484dd655", UID:"9d8b5ae9-e78c-49cc-aaad-7ecd63104747", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-b5897
W0915 13:28:27.665] I0915 13:28:23.242695   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554101-10606", Name:"nginx-8484dd655", UID:"9d8b5ae9-e78c-49cc-aaad-7ecd63104747", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-jc4f6
W0915 13:28:27.666] I0915 13:28:23.243662   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554101-10606", Name:"nginx-8484dd655", UID:"9d8b5ae9-e78c-49cc-aaad-7ecd63104747", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-2drkr
W0915 13:28:29.806] I0915 13:28:29.805850   52825 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568554092-15923
I0915 13:28:32.761] deployment.apps/nginx configured
... skipping 146 lines ...
I0915 13:28:39.958] +++ [0915 13:28:39] Creating namespace namespace-1568554119-28914
I0915 13:28:40.033] namespace/namespace-1568554119-28914 created
I0915 13:28:40.102] Context "test" modified.
I0915 13:28:40.108] +++ [0915 13:28:40] Testing kubectl get
I0915 13:28:40.212] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:40.300] (BSuccessful
I0915 13:28:40.300] message:Error from server (NotFound): pods "abc" not found
I0915 13:28:40.301] has:pods "abc" not found
I0915 13:28:40.389] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:40.473] (BSuccessful
I0915 13:28:40.474] message:Error from server (NotFound): pods "abc" not found
I0915 13:28:40.474] has:pods "abc" not found
I0915 13:28:40.556] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:40.634] (BSuccessful
I0915 13:28:40.634] message:{
I0915 13:28:40.635]     "apiVersion": "v1",
I0915 13:28:40.635]     "items": [],
... skipping 23 lines ...
I0915 13:28:40.958] has not:No resources found
I0915 13:28:41.037] Successful
I0915 13:28:41.038] message:NAME
I0915 13:28:41.038] has not:No resources found
I0915 13:28:41.122] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:41.216] (BSuccessful
I0915 13:28:41.217] message:error: the server doesn't have a resource type "foobar"
I0915 13:28:41.217] has not:No resources found
I0915 13:28:41.295] Successful
I0915 13:28:41.296] message:No resources found in namespace-1568554119-28914 namespace.
I0915 13:28:41.296] has:No resources found
I0915 13:28:41.376] Successful
I0915 13:28:41.376] message:
I0915 13:28:41.377] has not:No resources found
I0915 13:28:41.459] Successful
I0915 13:28:41.460] message:No resources found in namespace-1568554119-28914 namespace.
I0915 13:28:41.460] has:No resources found
I0915 13:28:41.540] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:41.622] (BSuccessful
I0915 13:28:41.622] message:Error from server (NotFound): pods "abc" not found
I0915 13:28:41.622] has:pods "abc" not found
I0915 13:28:41.623] FAIL!
I0915 13:28:41.624] message:Error from server (NotFound): pods "abc" not found
I0915 13:28:41.624] has not:List
I0915 13:28:41.624] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0915 13:28:41.729] Successful
I0915 13:28:41.729] message:I0915 13:28:41.685441   62818 loader.go:375] Config loaded from file:  /tmp/tmp.1LlAg7hWCM/.kube/config
I0915 13:28:41.730] I0915 13:28:41.686827   62818 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
I0915 13:28:41.730] I0915 13:28:41.707622   62818 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I0915 13:28:47.268] Successful
I0915 13:28:47.269] message:NAME    DATA   AGE
I0915 13:28:47.269] one     0      0s
I0915 13:28:47.269] three   0      0s
I0915 13:28:47.269] two     0      0s
I0915 13:28:47.269] STATUS    REASON          MESSAGE
I0915 13:28:47.269] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0915 13:28:47.269] has not:watch is only supported on individual resources
I0915 13:28:48.370] Successful
I0915 13:28:48.371] message:STATUS    REASON          MESSAGE
I0915 13:28:48.371] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0915 13:28:48.371] has not:watch is only supported on individual resources
I0915 13:28:48.376] +++ [0915 13:28:48] Creating namespace namespace-1568554128-24379
I0915 13:28:48.450] namespace/namespace-1568554128-24379 created
I0915 13:28:48.521] Context "test" modified.
I0915 13:28:48.611] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:48.784] (Bpod/valid-pod created
... skipping 56 lines ...
I0915 13:28:48.880] }
I0915 13:28:48.964] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0915 13:28:49.210] (B<no value>Successful
I0915 13:28:49.210] message:valid-pod:
I0915 13:28:49.211] has:valid-pod:
I0915 13:28:49.303] Successful
I0915 13:28:49.303] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0915 13:28:49.303] 	template was:
I0915 13:28:49.304] 		{.missing}
I0915 13:28:49.304] 	object given to jsonpath engine was:
I0915 13:28:49.305] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-15T13:28:48Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568554128-24379", "resourceVersion":"694", "selfLink":"/api/v1/namespaces/namespace-1568554128-24379/pods/valid-pod", "uid":"930c4d5e-59bb-4152-90e6-f83cbe34d9e5"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0915 13:28:49.305] has:missing is not found
I0915 13:28:49.397] Successful
I0915 13:28:49.397] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0915 13:28:49.397] 	template was:
I0915 13:28:49.397] 		{{.missing}}
I0915 13:28:49.397] 	raw data was:
I0915 13:28:49.398] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-15T13:28:48Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568554128-24379","resourceVersion":"694","selfLink":"/api/v1/namespaces/namespace-1568554128-24379/pods/valid-pod","uid":"930c4d5e-59bb-4152-90e6-f83cbe34d9e5"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0915 13:28:49.399] 	object given to template engine was:
I0915 13:28:49.399] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-15T13:28:48Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568554128-24379 resourceVersion:694 selfLink:/api/v1/namespaces/namespace-1568554128-24379/pods/valid-pod uid:930c4d5e-59bb-4152-90e6-f83cbe34d9e5] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0915 13:28:49.399] has:map has no entry for key "missing"
W0915 13:28:49.500] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0915 13:28:50.493] Successful
I0915 13:28:50.493] message:NAME        READY   STATUS    RESTARTS   AGE
I0915 13:28:50.494] valid-pod   0/1     Pending   0          1s
I0915 13:28:50.494] STATUS      REASON          MESSAGE
I0915 13:28:50.494] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0915 13:28:50.494] has:STATUS
I0915 13:28:50.495] Successful
I0915 13:28:50.495] message:NAME        READY   STATUS    RESTARTS   AGE
I0915 13:28:50.495] valid-pod   0/1     Pending   0          1s
I0915 13:28:50.496] STATUS      REASON          MESSAGE
I0915 13:28:50.496] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0915 13:28:50.496] has:valid-pod
I0915 13:28:51.581] Successful
I0915 13:28:51.581] message:pod/valid-pod
I0915 13:28:51.581] has not:STATUS
I0915 13:28:51.583] Successful
I0915 13:28:51.583] message:pod/valid-pod
... skipping 72 lines ...
I0915 13:28:52.682] status:
I0915 13:28:52.682]   phase: Pending
I0915 13:28:52.682]   qosClass: Guaranteed
I0915 13:28:52.682] ---
I0915 13:28:52.682] has:name: valid-pod
I0915 13:28:52.765] Successful
I0915 13:28:52.766] message:Error from server (NotFound): pods "invalid-pod" not found
I0915 13:28:52.766] has:"invalid-pod" not found
I0915 13:28:52.852] pod "valid-pod" deleted
I0915 13:28:52.963] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:28:53.131] (Bpod/redis-master created
I0915 13:28:53.135] pod/valid-pod created
I0915 13:28:53.230] Successful
... skipping 35 lines ...
I0915 13:28:54.368] +++ command: run_kubectl_exec_pod_tests
I0915 13:28:54.379] +++ [0915 13:28:54] Creating namespace namespace-1568554134-26198
I0915 13:28:54.456] namespace/namespace-1568554134-26198 created
I0915 13:28:54.536] Context "test" modified.
I0915 13:28:54.542] +++ [0915 13:28:54] Testing kubectl exec POD COMMAND
I0915 13:28:54.626] Successful
I0915 13:28:54.627] message:Error from server (NotFound): pods "abc" not found
I0915 13:28:54.627] has:pods "abc" not found
I0915 13:28:54.790] pod/test-pod created
I0915 13:28:54.891] Successful
I0915 13:28:54.892] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0915 13:28:54.892] has not:pods "test-pod" not found
I0915 13:28:54.893] Successful
I0915 13:28:54.893] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0915 13:28:54.894] has not:pod or type/name must be specified
I0915 13:28:54.973] pod "test-pod" deleted
I0915 13:28:54.994] +++ exit code: 0
I0915 13:28:55.026] Recording: run_kubectl_exec_resource_name_tests
I0915 13:28:55.027] Running command: run_kubectl_exec_resource_name_tests
I0915 13:28:55.047] 
... skipping 2 lines ...
I0915 13:28:55.054] +++ command: run_kubectl_exec_resource_name_tests
I0915 13:28:55.065] +++ [0915 13:28:55] Creating namespace namespace-1568554135-27292
I0915 13:28:55.143] namespace/namespace-1568554135-27292 created
I0915 13:28:55.223] Context "test" modified.
I0915 13:28:55.229] +++ [0915 13:28:55] Testing kubectl exec TYPE/NAME COMMAND
I0915 13:28:55.330] Successful
I0915 13:28:55.331] message:error: the server doesn't have a resource type "foo"
I0915 13:28:55.332] has:error:
I0915 13:28:55.417] Successful
I0915 13:28:55.418] message:Error from server (NotFound): deployments.apps "bar" not found
I0915 13:28:55.418] has:"bar" not found
I0915 13:28:55.586] pod/test-pod created
I0915 13:28:55.763] replicaset.apps/frontend created
W0915 13:28:55.864] I0915 13:28:55.766967   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554135-27292", Name:"frontend", UID:"f9209e44-d54c-4c98-9a28-faec01f58427", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tzc9p
W0915 13:28:55.865] I0915 13:28:55.770239   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554135-27292", Name:"frontend", UID:"f9209e44-d54c-4c98-9a28-faec01f58427", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cgrjf
W0915 13:28:55.865] I0915 13:28:55.770276   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554135-27292", Name:"frontend", UID:"f9209e44-d54c-4c98-9a28-faec01f58427", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wknl5
I0915 13:28:55.966] configmap/test-set-env-config created
I0915 13:28:56.021] Successful
I0915 13:28:56.022] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0915 13:28:56.022] has:not implemented
I0915 13:28:56.114] Successful
I0915 13:28:56.115] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0915 13:28:56.116] has not:not found
I0915 13:28:56.116] Successful
I0915 13:28:56.117] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0915 13:28:56.117] has not:pod or type/name must be specified
I0915 13:28:56.216] Successful
I0915 13:28:56.217] message:Error from server (BadRequest): pod frontend-cgrjf does not have a host assigned
I0915 13:28:56.217] has not:not found
I0915 13:28:56.218] Successful
I0915 13:28:56.218] message:Error from server (BadRequest): pod frontend-cgrjf does not have a host assigned
I0915 13:28:56.219] has not:pod or type/name must be specified
I0915 13:28:56.293] pod "test-pod" deleted
I0915 13:28:56.373] replicaset.apps "frontend" deleted
I0915 13:28:56.460] configmap "test-set-env-config" deleted
I0915 13:28:56.479] +++ exit code: 0
I0915 13:28:56.512] Recording: run_create_secret_tests
I0915 13:28:56.512] Running command: run_create_secret_tests
I0915 13:28:56.535] 
I0915 13:28:56.537] +++ Running case: test-cmd.run_create_secret_tests 
I0915 13:28:56.540] +++ working dir: /go/src/k8s.io/kubernetes
I0915 13:28:56.543] +++ command: run_create_secret_tests
I0915 13:28:56.633] Successful
I0915 13:28:56.633] message:Error from server (NotFound): secrets "mysecret" not found
I0915 13:28:56.634] has:secrets "mysecret" not found
I0915 13:28:56.783] Successful
I0915 13:28:56.784] message:Error from server (NotFound): secrets "mysecret" not found
I0915 13:28:56.784] has:secrets "mysecret" not found
I0915 13:28:56.785] Successful
I0915 13:28:56.786] message:user-specified
I0915 13:28:56.786] has:user-specified
I0915 13:28:56.855] Successful
I0915 13:28:56.928] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"1273c8b9-bbea-4dc9-b487-f956958092ae","resourceVersion":"767","creationTimestamp":"2019-09-15T13:28:56Z"}}
... skipping 2 lines ...
I0915 13:28:57.094] has:uid
I0915 13:28:57.169] Successful
I0915 13:28:57.170] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"1273c8b9-bbea-4dc9-b487-f956958092ae","resourceVersion":"768","creationTimestamp":"2019-09-15T13:28:56Z"},"data":{"key1":"config1"}}
I0915 13:28:57.170] has:config1
I0915 13:28:57.238] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"1273c8b9-bbea-4dc9-b487-f956958092ae"}}
I0915 13:28:57.326] Successful
I0915 13:28:57.326] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0915 13:28:57.327] has:configmaps "tester-update-cm" not found
I0915 13:28:57.340] +++ exit code: 0
I0915 13:28:57.373] Recording: run_kubectl_create_kustomization_directory_tests
I0915 13:28:57.373] Running command: run_kubectl_create_kustomization_directory_tests
I0915 13:28:57.394] 
I0915 13:28:57.396] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
W0915 13:28:59.993] I0915 13:28:57.840295   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554135-27292", Name:"test-the-deployment-69fdbb5f7d", UID:"c0d9d8f5-f7df-4c58-bc47-a5a99ae456c1", APIVersion:"apps/v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-xxn9j
W0915 13:28:59.994] I0915 13:28:57.840738   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554135-27292", Name:"test-the-deployment-69fdbb5f7d", UID:"c0d9d8f5-f7df-4c58-bc47-a5a99ae456c1", APIVersion:"apps/v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-p5nv6
I0915 13:29:00.975] Successful
I0915 13:29:00.976] message:NAME        READY   STATUS    RESTARTS   AGE
I0915 13:29:00.976] valid-pod   0/1     Pending   0          0s
I0915 13:29:00.976] STATUS      REASON          MESSAGE
I0915 13:29:00.976] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0915 13:29:00.976] has:Timeout exceeded while reading body
I0915 13:29:01.057] Successful
I0915 13:29:01.058] message:NAME        READY   STATUS    RESTARTS   AGE
I0915 13:29:01.058] valid-pod   0/1     Pending   0          2s
I0915 13:29:01.058] has:valid-pod
I0915 13:29:01.136] Successful
I0915 13:29:01.137] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0915 13:29:01.137] has:Invalid timeout value
I0915 13:29:01.222] pod "valid-pod" deleted
I0915 13:29:01.241] +++ exit code: 0
I0915 13:29:01.274] Recording: run_crd_tests
I0915 13:29:01.275] Running command: run_crd_tests
I0915 13:29:01.297] 
... skipping 158 lines ...
I0915 13:29:06.434] foo.company.com/test patched
I0915 13:29:06.529] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0915 13:29:06.616] (Bfoo.company.com/test patched
I0915 13:29:06.717] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0915 13:29:06.795] (Bfoo.company.com/test patched
I0915 13:29:06.884] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0915 13:29:07.035] (B+++ [0915 13:29:07] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0915 13:29:07.098] {
I0915 13:29:07.099]     "apiVersion": "company.com/v1",
I0915 13:29:07.099]     "kind": "Foo",
I0915 13:29:07.099]     "metadata": {
I0915 13:29:07.099]         "annotations": {
I0915 13:29:07.100]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 191 lines ...
I0915 13:29:36.775] bar.company.com/test created
I0915 13:29:36.874] crd.sh:455: Successful get bars {{len .items}}: 1
I0915 13:29:36.949] (Bnamespace "non-native-resources" deleted
I0915 13:29:42.150] crd.sh:458: Successful get bars {{len .items}}: 0
I0915 13:29:42.326] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0915 13:29:42.425] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
W0915 13:29:42.525] Error from server (NotFound): namespaces "non-native-resources" not found
I0915 13:29:42.626] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0915 13:29:42.652] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0915 13:29:42.680] +++ exit code: 0
I0915 13:29:42.716] Recording: run_cmd_with_img_tests
I0915 13:29:42.717] Running command: run_cmd_with_img_tests
I0915 13:29:42.741] 
... skipping 6 lines ...
I0915 13:29:42.930] +++ [0915 13:29:42] Testing cmd with image
I0915 13:29:43.024] Successful
I0915 13:29:43.024] message:deployment.apps/test1 created
I0915 13:29:43.024] has:deployment.apps/test1 created
I0915 13:29:43.110] deployment.apps "test1" deleted
I0915 13:29:43.186] Successful
I0915 13:29:43.187] message:error: Invalid image name "InvalidImageName": invalid reference format
I0915 13:29:43.187] has:error: Invalid image name "InvalidImageName": invalid reference format
I0915 13:29:43.199] +++ exit code: 0
I0915 13:29:43.233] +++ [0915 13:29:43] Testing recursive resources
I0915 13:29:43.238] +++ [0915 13:29:43] Creating namespace namespace-1568554183-2480
I0915 13:29:43.309] namespace/namespace-1568554183-2480 created
I0915 13:29:43.380] Context "test" modified.
I0915 13:29:43.470] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:29:43.746] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:43.748] (BSuccessful
I0915 13:29:43.749] message:pod/busybox0 created
I0915 13:29:43.749] pod/busybox1 created
I0915 13:29:43.749] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0915 13:29:43.750] has:error validating data: kind not set
I0915 13:29:43.837] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:44.003] (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0915 13:29:44.006] (BSuccessful
I0915 13:29:44.006] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:44.006] has:Object 'Kind' is missing
I0915 13:29:44.094] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:44.382] (Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0915 13:29:44.385] (BSuccessful
I0915 13:29:44.385] message:pod/busybox0 replaced
I0915 13:29:44.385] pod/busybox1 replaced
I0915 13:29:44.386] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0915 13:29:44.386] has:error validating data: kind not set
I0915 13:29:44.474] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:44.568] (BSuccessful
I0915 13:29:44.568] message:Name:         busybox0
I0915 13:29:44.568] Namespace:    namespace-1568554183-2480
I0915 13:29:44.568] Priority:     0
I0915 13:29:44.568] Node:         <none>
... skipping 159 lines ...
I0915 13:29:44.598] has:Object 'Kind' is missing
I0915 13:29:44.657] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:44.843] (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0915 13:29:44.845] (BSuccessful
I0915 13:29:44.846] message:pod/busybox0 annotated
I0915 13:29:44.846] pod/busybox1 annotated
I0915 13:29:44.846] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:44.846] has:Object 'Kind' is missing
I0915 13:29:44.931] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:45.182] (Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0915 13:29:45.184] (BSuccessful
I0915 13:29:45.184] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0915 13:29:45.185] pod/busybox0 configured
I0915 13:29:45.185] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0915 13:29:45.185] pod/busybox1 configured
I0915 13:29:45.185] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0915 13:29:45.186] has:error validating data: kind not set
I0915 13:29:45.267] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:29:45.419] (Bdeployment.apps/nginx created
I0915 13:29:45.523] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0915 13:29:45.611] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0915 13:29:45.778] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0915 13:29:45.781] Successful
... skipping 42 lines ...
I0915 13:29:45.858] deployment.apps "nginx" deleted
I0915 13:29:45.951] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:46.112] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:46.115] Successful
I0915 13:29:46.116] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0915 13:29:46.116] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0915 13:29:46.116] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:46.117] has:Object 'Kind' is missing
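The deprecation notice above comes from kubectl convert, which rewrites manifests to another API version client-side; the suggested replacement is to apply the object and read it back at the desired version. A hedged sketch (pod.yaml is a hypothetical file name):

    # Deprecated client-side conversion.
    kubectl convert -f pod.yaml --output-version=v1
    # Suggested replacement: apply, then request the stored object at the wanted version.
    kubectl apply -f pod.yaml
    kubectl get pod busybox0 -o yaml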
I0915 13:29:46.199] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:46.282] Successful
I0915 13:29:46.282] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:46.282] has:busybox0:busybox1:
I0915 13:29:46.284] Successful
I0915 13:29:46.284] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:46.284] has:Object 'Kind' is missing
I0915 13:29:46.368] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:46.457] pod/busybox0 labeled
I0915 13:29:46.458] pod/busybox1 labeled
I0915 13:29:46.458] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:46.544] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0915 13:29:46.546] Successful
I0915 13:29:46.547] message:pod/busybox0 labeled
I0915 13:29:46.547] pod/busybox1 labeled
I0915 13:29:46.547] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:46.548] has:Object 'Kind' is missing
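The mylabel: myvalue assertion above corresponds to a recursive kubectl label over the pod directory. A hedged sketch (path from the log, flags assumed):

    # Add a label to every pod manifest under the directory.
    kubectl label -f hack/testdata/recursive/pod --recursive mylabel=myvalue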
I0915 13:29:46.633] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:46.713] pod/busybox0 patched
I0915 13:29:46.713] pod/busybox1 patched
I0915 13:29:46.714] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:46.799] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0915 13:29:46.801] Successful
I0915 13:29:46.801] message:pod/busybox0 patched
I0915 13:29:46.802] pod/busybox1 patched
I0915 13:29:46.802] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:46.802] has:Object 'Kind' is missing
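The prom/busybox image asserted above is the result of a recursive kubectl patch of the pod specs. A hedged sketch; the patch body is an assumption chosen only to match the asserted result:

    # Strategic-merge patch the busybox container image on every pod manifest under the directory.
    kubectl patch -f hack/testdata/recursive/pod --recursive -p '{"spec":{"containers":[{"name":"busybox","image":"prom/busybox"}]}}'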
I0915 13:29:46.888] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:47.055] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:29:47.057] Successful
I0915 13:29:47.057] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0915 13:29:47.058] pod "busybox0" force deleted
I0915 13:29:47.058] pod "busybox1" force deleted
I0915 13:29:47.058] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0915 13:29:47.058] has:Object 'Kind' is missing
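The "Immediate deletion" warning above is produced by a forced delete, which removes the API objects without waiting for graceful termination. A hedged sketch (names from the log, flags assumed):

    # Force-delete without a grace period; the containers may keep running on the node.
    kubectl delete pod busybox0 busybox1 --force --grace-period=0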
I0915 13:29:47.141] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:29:47.302] replicationcontroller/busybox0 created
I0915 13:29:47.308] replicationcontroller/busybox1 created
I0915 13:29:47.403] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:47.491] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:47.575] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0915 13:29:47.666] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0915 13:29:47.842] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0915 13:29:47.927] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0915 13:29:47.930] Successful
I0915 13:29:47.930] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0915 13:29:47.930] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0915 13:29:47.931] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:47.931] has:Object 'Kind' is missing
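The HPA values asserted above (minReplicas 1, maxReplicas 2, target CPU 80%) come from autoscaling the replication controllers in the recursive rc directory. A hedged sketch (path from the log, flags assumed):

    # Create a HorizontalPodAutoscaler for each replication controller under the directory.
    kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80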
I0915 13:29:48.004] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0915 13:29:48.086] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0915 13:29:48.181] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:48.266] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0915 13:29:48.350] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0915 13:29:48.532] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0915 13:29:48.617] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0915 13:29:48.619] Successful
I0915 13:29:48.620] message:service/busybox0 exposed
I0915 13:29:48.620] service/busybox1 exposed
I0915 13:29:48.621] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:48.621] has:Object 'Kind' is missing
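The port 80 services asserted above come from exposing the replication controllers. A hedged sketch (path from the log, flags assumed):

    # Expose each replication controller under the directory as a Service on port 80.
    kubectl expose -f hack/testdata/recursive/rc --recursive --port=80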
I0915 13:29:48.710] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:48.796] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0915 13:29:48.881] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0915 13:29:49.076] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0915 13:29:49.160] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0915 13:29:49.162] Successful
I0915 13:29:49.163] message:replicationcontroller/busybox0 scaled
I0915 13:29:49.163] replicationcontroller/busybox1 scaled
I0915 13:29:49.163] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:49.163] has:Object 'Kind' is missing
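The replica counts moving from 1 to 2 above correspond to a recursive kubectl scale. A hedged sketch (path from the log, flags assumed):

    # Scale both replication controllers to 2 replicas.
    kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2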
I0915 13:29:49.251] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:49.424] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:29:49.426] Successful
I0915 13:29:49.426] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0915 13:29:49.426] replicationcontroller "busybox0" force deleted
I0915 13:29:49.427] replicationcontroller "busybox1" force deleted
I0915 13:29:49.427] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:49.427] has:Object 'Kind' is missing
I0915 13:29:49.515] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:29:49.679] deployment.apps/nginx1-deployment created
I0915 13:29:49.682] deployment.apps/nginx0-deployment created
W0915 13:29:49.783] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0915 13:29:49.783] I0915 13:29:43.016649   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554182-32583", Name:"test1", UID:"cc48c787-37a2-4172-ac7a-e934b303ccb0", APIVersion:"apps/v1", ResourceVersion:"927", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-6cdffdb5b8 to 1
W0915 13:29:49.784] I0915 13:29:43.023681   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554182-32583", Name:"test1-6cdffdb5b8", UID:"8e4877f4-2426-40fc-92e1-93bfa480b500", APIVersion:"apps/v1", ResourceVersion:"928", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-gd2s6
W0915 13:29:49.784] W0915 13:29:43.331065   49294 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0915 13:29:49.784] E0915 13:29:43.332852   52825 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.785] W0915 13:29:43.450132   49294 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0915 13:29:49.785] E0915 13:29:43.451311   52825 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.785] W0915 13:29:43.558389   49294 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0915 13:29:49.785] E0915 13:29:43.559914   52825 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.786] W0915 13:29:43.659830   49294 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0915 13:29:49.786] E0915 13:29:43.661267   52825 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.786] E0915 13:29:44.334065   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.786] E0915 13:29:44.452595   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.786] E0915 13:29:44.560944   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.787] E0915 13:29:44.662618   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.787] E0915 13:29:45.335224   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.787] I0915 13:29:45.423971   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554183-2480", Name:"nginx", UID:"195e8f95-2b42-4343-83a9-2868393b6ea7", APIVersion:"apps/v1", ResourceVersion:"952", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0915 13:29:49.787] I0915 13:29:45.427705   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554183-2480", Name:"nginx-f87d999f7", UID:"b9c3868e-d994-46ed-9171-ffd9b74f3bc8", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-pwnx6
W0915 13:29:49.788] I0915 13:29:45.430761   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554183-2480", Name:"nginx-f87d999f7", UID:"b9c3868e-d994-46ed-9171-ffd9b74f3bc8", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-bd7ng
W0915 13:29:49.788] I0915 13:29:45.431102   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554183-2480", Name:"nginx-f87d999f7", UID:"b9c3868e-d994-46ed-9171-ffd9b74f3bc8", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-tvdjk
W0915 13:29:49.788] E0915 13:29:45.455579   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.789] E0915 13:29:45.562504   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.789] E0915 13:29:45.663981   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.789] kubectl convert is DEPRECATED and will be removed in a future version.
W0915 13:29:49.789] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0915 13:29:49.789] E0915 13:29:46.336971   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.790] E0915 13:29:46.456737   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.790] E0915 13:29:46.563700   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.790] E0915 13:29:46.665125   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.790] I0915 13:29:47.049629   52825 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0915 13:29:49.790] I0915 13:29:47.306054   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554183-2480", Name:"busybox0", UID:"363c40c9-9a32-4b29-9d98-12d6f4cac49d", APIVersion:"v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qfb69
W0915 13:29:49.791] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0915 13:29:49.791] I0915 13:29:47.310688   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554183-2480", Name:"busybox1", UID:"08349681-18a8-479e-8440-79d69fa837e5", APIVersion:"v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-9t5bw
W0915 13:29:49.791] E0915 13:29:47.338201   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.791] E0915 13:29:47.458118   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.792] E0915 13:29:47.564871   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.792] E0915 13:29:47.666325   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.792] E0915 13:29:48.339578   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.792] E0915 13:29:48.459832   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.792] E0915 13:29:48.566198   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.793] E0915 13:29:48.667798   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.793] I0915 13:29:48.971665   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554183-2480", Name:"busybox0", UID:"363c40c9-9a32-4b29-9d98-12d6f4cac49d", APIVersion:"v1", ResourceVersion:"1005", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-wmd4r
W0915 13:29:49.794] I0915 13:29:48.982847   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554183-2480", Name:"busybox1", UID:"08349681-18a8-479e-8440-79d69fa837e5", APIVersion:"v1", ResourceVersion:"1009", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-8gt4p
W0915 13:29:49.794] E0915 13:29:49.341573   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.794] E0915 13:29:49.461035   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.794] E0915 13:29:49.567617   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.794] E0915 13:29:49.669475   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:49.795] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0915 13:29:49.795] I0915 13:29:49.682186   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554183-2480", Name:"nginx1-deployment", UID:"303b371c-2758-4c87-837d-96f581819b80", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W0915 13:29:49.795] I0915 13:29:49.685898   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554183-2480", Name:"nginx1-deployment-7bdbbfb5cf", UID:"8b2f9e5e-ee0b-414e-8883-73e09603bf5d", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-c6ggn
W0915 13:29:49.796] I0915 13:29:49.686196   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554183-2480", Name:"nginx0-deployment", UID:"c812a862-2069-40a0-885d-f3c534e91f6c", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W0915 13:29:49.796] I0915 13:29:49.688115   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554183-2480", Name:"nginx0-deployment-57c6bff7f6", UID:"b5140041-94ed-458f-8ff9-2d2775ef4a63", APIVersion:"apps/v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-7vkcm
W0915 13:29:49.796] I0915 13:29:49.689787   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554183-2480", Name:"nginx1-deployment-7bdbbfb5cf", UID:"8b2f9e5e-ee0b-414e-8883-73e09603bf5d", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-9jksl
W0915 13:29:49.797] I0915 13:29:49.692708   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554183-2480", Name:"nginx0-deployment-57c6bff7f6", UID:"b5140041-94ed-458f-8ff9-2d2775ef4a63", APIVersion:"apps/v1", ResourceVersion:"1032", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-h6rjs
I0915 13:29:49.897] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0915 13:29:49.898] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0915 13:29:50.073] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0915 13:29:50.075] Successful
I0915 13:29:50.075] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0915 13:29:50.075] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0915 13:29:50.076] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0915 13:29:50.076] has:Object 'Kind' is missing
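The "skipped rollback" messages above come from kubectl rollout undo when the current template already matches the target revision; the pause and history steps that follow exercise the same deployments. A hedged sketch of the rollout subcommands involved (names and paths from the log; exact invocations assumed):

    # Roll back to the previous revision (a no-op here, since only revision 1 exists).
    kubectl rollout undo -f hack/testdata/recursive/deployment --recursive
    kubectl rollout pause deployment nginx1-deployment nginx0-deployment
    kubectl rollout history -f hack/testdata/recursive/deployment --recursive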
I0915 13:29:50.162] deployment.apps/nginx1-deployment paused
I0915 13:29:50.165] deployment.apps/nginx0-deployment paused
I0915 13:29:50.263] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0915 13:29:50.266] Successful
I0915 13:29:50.266] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0915 13:29:50.556] 1         <none>
I0915 13:29:50.556] 
I0915 13:29:50.557] deployment.apps/nginx0-deployment 
I0915 13:29:50.557] REVISION  CHANGE-CAUSE
I0915 13:29:50.557] 1         <none>
I0915 13:29:50.557] 
I0915 13:29:50.558] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0915 13:29:50.558] has:nginx0-deployment
I0915 13:29:50.558] Successful
I0915 13:29:50.559] message:deployment.apps/nginx1-deployment 
I0915 13:29:50.559] REVISION  CHANGE-CAUSE
I0915 13:29:50.559] 1         <none>
I0915 13:29:50.560] 
I0915 13:29:50.560] deployment.apps/nginx0-deployment 
I0915 13:29:50.560] REVISION  CHANGE-CAUSE
I0915 13:29:50.561] 1         <none>
I0915 13:29:50.561] 
I0915 13:29:50.561] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0915 13:29:50.562] has:nginx1-deployment
I0915 13:29:50.562] Successful
I0915 13:29:50.562] message:deployment.apps/nginx1-deployment 
I0915 13:29:50.562] REVISION  CHANGE-CAUSE
I0915 13:29:50.562] 1         <none>
I0915 13:29:50.562] 
I0915 13:29:50.562] deployment.apps/nginx0-deployment 
I0915 13:29:50.562] REVISION  CHANGE-CAUSE
I0915 13:29:50.562] 1         <none>
I0915 13:29:50.562] 
I0915 13:29:50.563] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0915 13:29:50.563] has:Object 'Kind' is missing
I0915 13:29:50.635] deployment.apps "nginx1-deployment" force deleted
I0915 13:29:50.642] deployment.apps "nginx0-deployment" force deleted
W0915 13:29:50.743] E0915 13:29:50.342768   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:50.743] E0915 13:29:50.462413   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:50.743] E0915 13:29:50.568891   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:50.743] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0915 13:29:50.744] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0915 13:29:50.744] E0915 13:29:50.670733   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:51.344] E0915 13:29:51.344086   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:51.464] E0915 13:29:51.463887   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:51.570] E0915 13:29:51.570264   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:51.672] E0915 13:29:51.672024   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:29:51.773] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:29:51.900] replicationcontroller/busybox0 created
I0915 13:29:51.904] replicationcontroller/busybox1 created
W0915 13:29:52.005] I0915 13:29:51.902985   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554183-2480", Name:"busybox0", UID:"bfc06b29-cd70-4485-a58f-871c591ec13f", APIVersion:"v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-hw98t
W0915 13:29:52.005] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0915 13:29:52.005] I0915 13:29:51.907202   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554183-2480", Name:"busybox1", UID:"92db52e0-5856-4861-b594-fd6213a7e50a", APIVersion:"v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-2cpd9
I0915 13:29:52.106] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0915 13:29:52.106] Successful
I0915 13:29:52.106] message:no rollbacker has been implemented for "ReplicationController"
I0915 13:29:52.107] no rollbacker has been implemented for "ReplicationController"
I0915 13:29:52.107] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0915 13:29:52.107] message:no rollbacker has been implemented for "ReplicationController"
I0915 13:29:52.107] no rollbacker has been implemented for "ReplicationController"
I0915 13:29:52.108] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:52.108] has:Object 'Kind' is missing
I0915 13:29:52.207] Successful
I0915 13:29:52.208] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:52.209] error: replicationcontrollers "busybox0" pausing is not supported
I0915 13:29:52.209] error: replicationcontrollers "busybox1" pausing is not supported
I0915 13:29:52.209] has:Object 'Kind' is missing
I0915 13:29:52.211] Successful
I0915 13:29:52.212] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:52.212] error: replicationcontrollers "busybox0" pausing is not supported
I0915 13:29:52.212] error: replicationcontrollers "busybox1" pausing is not supported
I0915 13:29:52.212] has:replicationcontrollers "busybox0" pausing is not supported
I0915 13:29:52.214] Successful
I0915 13:29:52.215] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:52.215] error: replicationcontrollers "busybox0" pausing is not supported
I0915 13:29:52.215] error: replicationcontrollers "busybox1" pausing is not supported
I0915 13:29:52.215] has:replicationcontrollers "busybox1" pausing is not supported
I0915 13:29:52.311] Successful
I0915 13:29:52.312] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:52.312] error: replicationcontrollers "busybox0" resuming is not supported
I0915 13:29:52.312] error: replicationcontrollers "busybox1" resuming is not supported
I0915 13:29:52.313] has:Object 'Kind' is missing
I0915 13:29:52.316] Successful
I0915 13:29:52.317] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:52.317] error: replicationcontrollers "busybox0" resuming is not supported
I0915 13:29:52.317] error: replicationcontrollers "busybox1" resuming is not supported
I0915 13:29:52.317] has:replicationcontrollers "busybox0" resuming is not supported
I0915 13:29:52.318] Successful
I0915 13:29:52.319] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0915 13:29:52.319] error: replicationcontrollers "busybox0" resuming is not supported
I0915 13:29:52.319] error: replicationcontrollers "busybox1" resuming is not supported
I0915 13:29:52.319] has:replicationcontrollers "busybox0" resuming is not supported
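The errors above show that rollout verbs such as pause and resume are only implemented for workload kinds with rollout support (for example Deployments); ReplicationControllers reject them, and undo likewise reports that no rollbacker exists. A hedged sketch:

    # Supported: pause/resume a Deployment rollout.
    kubectl rollout pause deployment/nginx
    kubectl rollout resume deployment/nginx
    # Rejected for a ReplicationController: "pausing is not supported".
    kubectl rollout pause rc/busybox0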
I0915 13:29:52.400] replicationcontroller "busybox0" force deleted
I0915 13:29:52.406] replicationcontroller "busybox1" force deleted
W0915 13:29:52.507] E0915 13:29:52.345745   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:52.507] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0915 13:29:52.508] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0915 13:29:52.508] E0915 13:29:52.465428   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:52.572] E0915 13:29:52.571603   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:52.674] E0915 13:29:52.673479   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:53.347] E0915 13:29:53.347082   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:29:53.448] Recording: run_namespace_tests
I0915 13:29:53.448] Running command: run_namespace_tests
I0915 13:29:53.449] 
I0915 13:29:53.449] +++ Running case: test-cmd.run_namespace_tests 
I0915 13:29:53.449] +++ working dir: /go/src/k8s.io/kubernetes
I0915 13:29:53.449] +++ command: run_namespace_tests
I0915 13:29:53.453] +++ [0915 13:29:53] Testing kubectl(v1:namespaces)
I0915 13:29:53.522] namespace/my-namespace created
I0915 13:29:53.609] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0915 13:29:53.683] namespace "my-namespace" deleted
W0915 13:29:53.784] E0915 13:29:53.466635   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:53.785] E0915 13:29:53.572874   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:53.785] E0915 13:29:53.675247   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:54.349] E0915 13:29:54.348420   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:54.468] E0915 13:29:54.468271   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:54.575] E0915 13:29:54.574415   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:54.677] E0915 13:29:54.676993   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:55.350] E0915 13:29:55.349827   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:55.470] E0915 13:29:55.469662   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:55.576] E0915 13:29:55.575808   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:55.678] E0915 13:29:55.678222   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:56.351] E0915 13:29:56.351196   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:56.471] E0915 13:29:56.470889   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:56.577] E0915 13:29:56.577027   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:56.680] E0915 13:29:56.679497   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:57.353] E0915 13:29:57.352460   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:57.472] E0915 13:29:57.472239   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:57.578] E0915 13:29:57.578312   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:57.681] E0915 13:29:57.680842   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:58.354] E0915 13:29:58.353763   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:58.474] E0915 13:29:58.473555   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:58.580] E0915 13:29:58.579915   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:29:58.682] E0915 13:29:58.682349   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:29:58.783] namespace/my-namespace condition met
I0915 13:29:58.862] Successful
I0915 13:29:58.863] message:Error from server (NotFound): namespaces "my-namespace" not found
I0915 13:29:58.863] has: not found
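The "condition met" line followed by the NotFound error above is the test waiting for the namespace to finish terminating and then confirming it is gone. A hedged sketch (the exact wait invocation used by the test is an assumption):

    # Delete the namespace, block until it is fully removed, then confirm it no longer exists.
    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s
    kubectl get namespace my-namespace   # -> Error from server (NotFound)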
I0915 13:29:58.934] namespace/my-namespace created
I0915 13:29:59.023] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0915 13:29:59.209] Successful
I0915 13:29:59.209] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0915 13:29:59.209] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0915 13:29:59.212] namespace "namespace-1568554138-29294" deleted
I0915 13:29:59.213] namespace "namespace-1568554139-22335" deleted
I0915 13:29:59.213] namespace "namespace-1568554141-25250" deleted
I0915 13:29:59.213] namespace "namespace-1568554142-13821" deleted
I0915 13:29:59.213] namespace "namespace-1568554182-32583" deleted
I0915 13:29:59.213] namespace "namespace-1568554183-2480" deleted
I0915 13:29:59.213] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0915 13:29:59.213] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0915 13:29:59.213] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0915 13:29:59.213] has:warning: deleting cluster-scoped resources
I0915 13:29:59.214] Successful
I0915 13:29:59.214] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0915 13:29:59.214] namespace "kube-node-lease" deleted
I0915 13:29:59.214] namespace "my-namespace" deleted
I0915 13:29:59.214] namespace "namespace-1568554049-11340" deleted
... skipping 27 lines ...
I0915 13:29:59.217] namespace "namespace-1568554138-29294" deleted
I0915 13:29:59.217] namespace "namespace-1568554139-22335" deleted
I0915 13:29:59.217] namespace "namespace-1568554141-25250" deleted
I0915 13:29:59.217] namespace "namespace-1568554142-13821" deleted
I0915 13:29:59.217] namespace "namespace-1568554182-32583" deleted
I0915 13:29:59.217] namespace "namespace-1568554183-2480" deleted
I0915 13:29:59.217] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0915 13:29:59.217] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0915 13:29:59.218] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0915 13:29:59.218] has:namespace "my-namespace" deleted
I0915 13:29:59.313] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0915 13:29:59.382] namespace/other created
I0915 13:29:59.469] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0915 13:29:59.556] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:29:59.717] pod/valid-pod created
I0915 13:29:59.814] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0915 13:29:59.904] (Bcore.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0915 13:29:59.985] Successful
I0915 13:29:59.986] message:error: a resource cannot be retrieved by name across all namespaces
I0915 13:29:59.986] has:a resource cannot be retrieved by name across all namespaces
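The error above is kubectl refusing to combine a specific resource name with --all-namespaces, because a name is only unique within a single namespace. A hedged sketch:

    # Rejected: a named pod cannot be looked up across all namespaces.
    kubectl get pod valid-pod --all-namespaces
    # Works: restrict the lookup to one namespace.
    kubectl get pod valid-pod --namespace=other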
I0915 13:30:00.074] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0915 13:30:00.157] pod "valid-pod" force deleted
I0915 13:30:00.256] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:30:00.336] namespace "other" deleted
W0915 13:30:00.437] E0915 13:29:59.355065   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:00.437] E0915 13:29:59.474846   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:00.438] E0915 13:29:59.581271   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:00.438] E0915 13:29:59.683692   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:00.438] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0915 13:30:00.438] E0915 13:30:00.356483   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:00.476] E0915 13:30:00.476150   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:00.524] I0915 13:30:00.523558   52825 shared_informer.go:197] Waiting for caches to sync for resource quota
W0915 13:30:00.524] I0915 13:30:00.523625   52825 shared_informer.go:204] Caches are synced for resource quota 
W0915 13:30:00.583] E0915 13:30:00.582794   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:00.685] E0915 13:30:00.685027   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:00.946] I0915 13:30:00.946277   52825 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0915 13:30:00.947] I0915 13:30:00.946355   52825 shared_informer.go:204] Caches are synced for garbage collector 
W0915 13:30:01.358] E0915 13:30:01.357832   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:01.478] E0915 13:30:01.477487   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:01.584] E0915 13:30:01.584197   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:01.687] E0915 13:30:01.686507   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:02.359] E0915 13:30:02.359209   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:02.479] E0915 13:30:02.479015   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:02.586] E0915 13:30:02.585601   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:02.688] E0915 13:30:02.687994   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:02.746] I0915 13:30:02.746192   52825 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1568554183-2480
W0915 13:30:02.750] I0915 13:30:02.749822   52825 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1568554183-2480
W0915 13:30:03.361] E0915 13:30:03.360938   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:03.481] E0915 13:30:03.480515   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:03.587] E0915 13:30:03.587103   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:03.690] E0915 13:30:03.689726   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:04.364] E0915 13:30:04.363902   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:04.483] E0915 13:30:04.483077   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:04.590] E0915 13:30:04.589295   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:04.691] E0915 13:30:04.691240   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:05.365] E0915 13:30:05.364892   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:05.466] +++ exit code: 0
I0915 13:30:05.494] Recording: run_secrets_test
I0915 13:30:05.494] Running command: run_secrets_test
I0915 13:30:05.518] 
I0915 13:30:05.521] +++ Running case: test-cmd.run_secrets_test 
I0915 13:30:05.524] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0915 13:30:07.424] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0915 13:30:07.502] (Bsecret "test-secret" deleted
I0915 13:30:07.585] secret/test-secret created
I0915 13:30:07.677] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0915 13:30:07.771] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0915 13:30:07.850] secret "test-secret" deleted
W0915 13:30:07.950] E0915 13:30:05.485684   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.951] E0915 13:30:05.590867   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.951] E0915 13:30:05.692964   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.951] I0915 13:30:05.774761   69016 loader.go:375] Config loaded from file:  /tmp/tmp.1LlAg7hWCM/.kube/config
W0915 13:30:07.951] E0915 13:30:06.366211   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.952] E0915 13:30:06.487005   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.952] E0915 13:30:06.592608   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.952] E0915 13:30:06.694181   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.952] E0915 13:30:07.367963   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.952] E0915 13:30:07.488403   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.953] E0915 13:30:07.593855   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:07.953] E0915 13:30:07.695698   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:08.053] secret/secret-string-data created
I0915 13:30:08.120] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0915 13:30:08.212] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0915 13:30:08.297] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0915 13:30:08.373] secret "secret-string-data" deleted
I0915 13:30:08.472] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:30:08.635] secret "test-secret" deleted
I0915 13:30:08.722] namespace "test-secrets" deleted
W0915 13:30:08.823] E0915 13:30:08.369501   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:08.823] E0915 13:30:08.489897   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:08.823] E0915 13:30:08.595220   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:08.824] E0915 13:30:08.697189   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:08.869] I0915 13:30:08.868980   52825 namespace_controller.go:171] Namespace has been deleted my-namespace
W0915 13:30:09.296] I0915 13:30:09.296027   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554049-11340
W0915 13:30:09.301] I0915 13:30:09.301297   52825 namespace_controller.go:171] Namespace has been deleted kube-node-lease
W0915 13:30:09.307] I0915 13:30:09.306799   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554051-30676
W0915 13:30:09.307] I0915 13:30:09.306867   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554063-17652
W0915 13:30:09.310] I0915 13:30:09.310257   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554067-27545
W0915 13:30:09.311] I0915 13:30:09.310257   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554067-27287
W0915 13:30:09.312] I0915 13:30:09.312243   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554062-3622
W0915 13:30:09.329] I0915 13:30:09.328844   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554054-32654
W0915 13:30:09.334] I0915 13:30:09.333643   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554066-5170
W0915 13:30:09.360] I0915 13:30:09.359999   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554060-17839
W0915 13:30:09.371] E0915 13:30:09.370987   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:09.492] E0915 13:30:09.491351   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:09.500] I0915 13:30:09.500019   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554089-24450
W0915 13:30:09.502] I0915 13:30:09.502122   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554088-23374
W0915 13:30:09.510] I0915 13:30:09.510191   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554077-5663
W0915 13:30:09.521] I0915 13:30:09.520546   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554091-23617
W0915 13:30:09.524] I0915 13:30:09.524087   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554092-15923
W0915 13:30:09.525] I0915 13:30:09.524135   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554077-17658
W0915 13:30:09.529] I0915 13:30:09.529009   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554092-28668
W0915 13:30:09.542] I0915 13:30:09.542193   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554095-20377
W0915 13:30:09.559] I0915 13:30:09.558957   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554095-1098
W0915 13:30:09.597] E0915 13:30:09.596738   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:09.602] I0915 13:30:09.601621   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554098-5305
W0915 13:30:09.699] E0915 13:30:09.699078   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:09.700] I0915 13:30:09.700013   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554100-21747
W0915 13:30:09.719] I0915 13:30:09.718356   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554118-6018
W0915 13:30:09.743] I0915 13:30:09.742915   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554119-22528
W0915 13:30:09.746] I0915 13:30:09.746048   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554134-26198
W0915 13:30:09.767] I0915 13:30:09.767019   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554119-28914
W0915 13:30:09.779] I0915 13:30:09.778863   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554128-24379
... skipping 3 lines ...
W0915 13:30:09.827] I0915 13:30:09.826283   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554138-29294
W0915 13:30:09.888] I0915 13:30:09.887970   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554139-22335
W0915 13:30:09.902] I0915 13:30:09.901589   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554141-25250
W0915 13:30:09.908] I0915 13:30:09.908261   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554142-13821
W0915 13:30:09.925] I0915 13:30:09.924512   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554182-32583
W0915 13:30:09.968] I0915 13:30:09.967327   52825 namespace_controller.go:171] Namespace has been deleted namespace-1568554183-2480
W0915 13:30:10.373] E0915 13:30:10.372656   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:10.431] I0915 13:30:10.431136   52825 namespace_controller.go:171] Namespace has been deleted other
W0915 13:30:10.493] E0915 13:30:10.492761   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:10.598] E0915 13:30:10.598089   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:10.701] E0915 13:30:10.700517   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:11.374] E0915 13:30:11.374041   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:11.494] E0915 13:30:11.494137   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:11.600] E0915 13:30:11.599547   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:11.702] E0915 13:30:11.701853   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:12.378] E0915 13:30:12.375296   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:12.496] E0915 13:30:12.495667   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:12.601] E0915 13:30:12.600975   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:12.703] E0915 13:30:12.703047   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:13.377] E0915 13:30:13.376837   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:13.497] E0915 13:30:13.497041   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:13.603] E0915 13:30:13.602440   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:13.705] E0915 13:30:13.704522   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:13.833] +++ exit code: 0
I0915 13:30:13.866] Recording: run_configmap_tests
I0915 13:30:13.867] Running command: run_configmap_tests
I0915 13:30:13.891] 
I0915 13:30:13.894] +++ Running case: test-cmd.run_configmap_tests 
I0915 13:30:13.897] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0915 13:30:15.032] configmap/test-binary-configmap created
I0915 13:30:15.124] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0915 13:30:15.210] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0915 13:30:15.444] configmap "test-configmap" deleted
I0915 13:30:15.524] configmap "test-binary-configmap" deleted
I0915 13:30:15.601] namespace "test-configmaps" deleted
W0915 13:30:15.702] E0915 13:30:14.378170   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:15.702] E0915 13:30:14.498698   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:15.703] E0915 13:30:14.604014   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:15.703] E0915 13:30:14.705876   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:15.703] E0915 13:30:15.379760   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:15.704] E0915 13:30:15.499918   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:15.704] E0915 13:30:15.605261   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:15.707] E0915 13:30:15.707158   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:16.381] E0915 13:30:16.381027   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:16.501] E0915 13:30:16.501203   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:16.607] E0915 13:30:16.606618   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:16.709] E0915 13:30:16.708758   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:17.383] E0915 13:30:17.382481   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:17.503] E0915 13:30:17.502621   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:17.608] E0915 13:30:17.608079   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:17.711] E0915 13:30:17.710398   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:18.384] E0915 13:30:18.383698   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:18.504] E0915 13:30:18.504005   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:18.610] E0915 13:30:18.609474   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:18.712] E0915 13:30:18.711683   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:18.809] I0915 13:30:18.809347   52825 namespace_controller.go:171] Namespace has been deleted test-secrets
W0915 13:30:19.385] E0915 13:30:19.384805   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:19.505] E0915 13:30:19.505262   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:19.611] E0915 13:30:19.610814   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:19.713] E0915 13:30:19.712910   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:20.386] E0915 13:30:20.386246   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:20.507] E0915 13:30:20.506548   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:20.612] E0915 13:30:20.611531   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:20.712] +++ exit code: 0
I0915 13:30:20.737] Recording: run_client_config_tests
I0915 13:30:20.738] Running command: run_client_config_tests
I0915 13:30:20.759] 
I0915 13:30:20.762] +++ Running case: test-cmd.run_client_config_tests 
I0915 13:30:20.764] +++ working dir: /go/src/k8s.io/kubernetes
I0915 13:30:20.768] +++ command: run_client_config_tests
I0915 13:30:20.779] +++ [0915 13:30:20] Creating namespace namespace-1568554220-4587
I0915 13:30:20.850] namespace/namespace-1568554220-4587 created
I0915 13:30:20.917] Context "test" modified.
I0915 13:30:20.924] +++ [0915 13:30:20] Testing client config
I0915 13:30:20.992] Successful
I0915 13:30:20.993] message:error: stat missing: no such file or directory
I0915 13:30:20.993] has:missing: no such file or directory
I0915 13:30:21.060] Successful
I0915 13:30:21.061] message:error: stat missing: no such file or directory
I0915 13:30:21.061] has:missing: no such file or directory
I0915 13:30:21.130] Successful
I0915 13:30:21.131] message:error: stat missing: no such file or directory
I0915 13:30:21.131] has:missing: no such file or directory
I0915 13:30:21.202] Successful
I0915 13:30:21.203] message:Error in configuration: context was not found for specified context: missing-context
I0915 13:30:21.203] has:context was not found for specified context: missing-context
I0915 13:30:21.280] Successful
I0915 13:30:21.281] message:error: no server found for cluster "missing-cluster"
I0915 13:30:21.281] has:no server found for cluster "missing-cluster"
I0915 13:30:21.349] Successful
I0915 13:30:21.349] message:error: auth info "missing-user" does not exist
I0915 13:30:21.349] has:auth info "missing-user" does not exist
W0915 13:30:21.450] E0915 13:30:20.714052   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:21.450] E0915 13:30:21.387648   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:21.508] E0915 13:30:21.507844   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:21.609] Successful
I0915 13:30:21.609] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0915 13:30:21.610] has:error loading config file
I0915 13:30:21.610] Successful
I0915 13:30:21.610] message:error: stat missing-config: no such file or directory
I0915 13:30:21.610] has:no such file or directory
I0915 13:30:21.611] +++ exit code: 0
I0915 13:30:21.611] Recording: run_service_accounts_tests
I0915 13:30:21.611] Running command: run_service_accounts_tests
I0915 13:30:21.625] 
I0915 13:30:21.628] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0915 13:30:21.962] namespace/test-service-accounts created
I0915 13:30:22.066] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0915 13:30:22.154] serviceaccount/test-service-account created
I0915 13:30:22.254] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0915 13:30:22.332] serviceaccount "test-service-account" deleted
I0915 13:30:22.420] namespace "test-service-accounts" deleted
W0915 13:30:22.521] E0915 13:30:21.612918   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:22.522] E0915 13:30:21.716701   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:22.522] E0915 13:30:22.388952   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:22.522] E0915 13:30:22.509147   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:22.615] E0915 13:30:22.614876   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:22.718] E0915 13:30:22.718044   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:23.390] E0915 13:30:23.390235   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:23.511] E0915 13:30:23.510973   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:23.617] E0915 13:30:23.616572   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:23.720] E0915 13:30:23.719497   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:24.392] E0915 13:30:24.391539   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:24.513] E0915 13:30:24.512349   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:24.618] E0915 13:30:24.617790   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:24.721] E0915 13:30:24.720793   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:25.393] E0915 13:30:25.392968   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:25.514] E0915 13:30:25.513765   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:25.619] E0915 13:30:25.619045   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:25.681] I0915 13:30:25.680556   52825 namespace_controller.go:171] Namespace has been deleted test-configmaps
W0915 13:30:25.722] E0915 13:30:25.721833   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:26.395] E0915 13:30:26.394669   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:26.517] E0915 13:30:26.516058   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:26.622] E0915 13:30:26.621477   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:26.725] E0915 13:30:26.724432   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:27.396] E0915 13:30:27.395931   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:27.518] E0915 13:30:27.517407   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:27.618] +++ exit code: 0
I0915 13:30:27.619] Recording: run_job_tests
I0915 13:30:27.619] Running command: run_job_tests
I0915 13:30:27.619] 
I0915 13:30:27.620] +++ Running case: test-cmd.run_job_tests 
I0915 13:30:27.620] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0915 13:30:28.329] Labels:                        run=pi
I0915 13:30:28.329] Annotations:                   <none>
I0915 13:30:28.329] Schedule:                      59 23 31 2 *
I0915 13:30:28.329] Concurrency Policy:            Allow
I0915 13:30:28.329] Suspend:                       False
I0915 13:30:28.329] Successful Job History Limit:  3
I0915 13:30:28.329] Failed Job History Limit:      1
I0915 13:30:28.329] Starting Deadline Seconds:     <unset>
I0915 13:30:28.330] Selector:                      <unset>
I0915 13:30:28.330] Parallelism:                   <unset>
I0915 13:30:28.330] Completions:                   <unset>
I0915 13:30:28.330] Pod Template:
I0915 13:30:28.330]   Labels:  run=pi
... skipping 32 lines ...
I0915 13:30:28.837]                 run=pi
I0915 13:30:28.838] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0915 13:30:28.838] Controlled By:  CronJob/pi
I0915 13:30:28.838] Parallelism:    1
I0915 13:30:28.839] Completions:    1
I0915 13:30:28.839] Start Time:     Sun, 15 Sep 2019 13:30:28 +0000
I0915 13:30:28.839] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0915 13:30:28.839] Pod Template:
I0915 13:30:28.840]   Labels:  controller-uid=ed94264d-6f88-4d77-9121-d70727030ce7
I0915 13:30:28.840]            job-name=test-job
I0915 13:30:28.840]            run=pi
I0915 13:30:28.840]   Containers:
I0915 13:30:28.841]    pi:
... skipping 15 lines ...
I0915 13:30:28.844]   Type    Reason            Age   From            Message
I0915 13:30:28.844]   ----    ------            ----  ----            -------
I0915 13:30:28.845]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-hpbtf
I0915 13:30:28.913] job.batch "test-job" deleted
I0915 13:30:28.993] cronjob.batch "pi" deleted
I0915 13:30:29.071] namespace "test-jobs" deleted
W0915 13:30:29.172] E0915 13:30:27.622656   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.172] E0915 13:30:27.725511   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.173] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0915 13:30:29.173] E0915 13:30:28.398687   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.173] E0915 13:30:28.519947   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.173] I0915 13:30:28.581897   52825 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"ed94264d-6f88-4d77-9121-d70727030ce7", APIVersion:"batch/v1", ResourceVersion:"1397", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-hpbtf
W0915 13:30:29.174] E0915 13:30:28.624218   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.174] E0915 13:30:28.726980   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.400] E0915 13:30:29.400034   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.521] E0915 13:30:29.521192   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.626] E0915 13:30:29.625566   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:29.728] E0915 13:30:29.727988   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:30.402] E0915 13:30:30.401416   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:30.523] E0915 13:30:30.522523   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:30.627] E0915 13:30:30.626594   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:30.729] E0915 13:30:30.728993   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:31.403] E0915 13:30:31.402839   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:31.524] E0915 13:30:31.523857   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:31.628] E0915 13:30:31.628161   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:31.731] E0915 13:30:31.730389   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:32.404] E0915 13:30:32.404030   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:32.501] I0915 13:30:32.500872   52825 namespace_controller.go:171] Namespace has been deleted test-service-accounts
W0915 13:30:32.526] E0915 13:30:32.525406   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:32.630] E0915 13:30:32.629452   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:32.732] E0915 13:30:32.731709   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:33.406] E0915 13:30:33.405521   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:33.527] E0915 13:30:33.526927   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:33.631] E0915 13:30:33.630702   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:33.734] E0915 13:30:33.733337   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:34.188] +++ exit code: 0
I0915 13:30:34.222] Recording: run_create_job_tests
I0915 13:30:34.222] Running command: run_create_job_tests
I0915 13:30:34.245] 
I0915 13:30:34.247] +++ Running case: test-cmd.run_create_job_tests 
I0915 13:30:34.250] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 27 lines ...
I0915 13:30:35.579] +++ [0915 13:30:35] Testing pod templates
I0915 13:30:35.666] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:30:35.823] podtemplate/nginx created
I0915 13:30:35.920] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0915 13:30:35.991] NAME    CONTAINERS   IMAGES   POD LABELS
I0915 13:30:35.991] nginx   nginx        nginx    name=nginx
W0915 13:30:36.092] E0915 13:30:34.406778   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:36.092] I0915 13:30:34.489483   52825 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568554234-28970", Name:"test-job", UID:"41506395-d39a-4618-a109-1f0c53b7475c", APIVersion:"batch/v1", ResourceVersion:"1415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-zhvfq
W0915 13:30:36.093] E0915 13:30:34.528314   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:36.093] E0915 13:30:34.631901   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:36.093] E0915 13:30:34.734447   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:36.094] I0915 13:30:34.738944   52825 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568554234-28970", Name:"test-job-pi", UID:"66b6379d-6e86-461b-83e8-167648cec14f", APIVersion:"batch/v1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-f6msj
W0915 13:30:36.094] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0915 13:30:36.094] I0915 13:30:35.080256   52825 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568554234-28970", Name:"my-pi", UID:"86fd6a65-d1f7-427f-a839-ccf94208292e", APIVersion:"batch/v1", ResourceVersion:"1430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-r9wlv
W0915 13:30:36.095] E0915 13:30:35.408142   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:36.095] E0915 13:30:35.529655   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:36.095] E0915 13:30:35.633164   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:36.095] E0915 13:30:35.735724   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:36.096] I0915 13:30:35.820279   49294 controller.go:606] quota admission added evaluator for: podtemplates
I0915 13:30:36.196] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0915 13:30:36.247] podtemplate "nginx" deleted
I0915 13:30:36.347] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:30:36.362] +++ exit code: 0
I0915 13:30:36.394] Recording: run_service_tests
... skipping 66 lines ...
I0915 13:30:37.247] Port:              <unset>  6379/TCP
I0915 13:30:37.247] TargetPort:        6379/TCP
I0915 13:30:37.248] Endpoints:         <none>
I0915 13:30:37.248] Session Affinity:  None
I0915 13:30:37.248] Events:            <none>
I0915 13:30:37.248] 
W0915 13:30:37.349] E0915 13:30:36.409490   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:37.349] E0915 13:30:36.530867   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:37.349] E0915 13:30:36.634571   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:37.350] E0915 13:30:36.737719   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:37.411] E0915 13:30:37.411063   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:37.512] Successful describe services:
I0915 13:30:37.513] Name:              kubernetes
I0915 13:30:37.513] Namespace:         default
I0915 13:30:37.513] Labels:            component=apiserver
I0915 13:30:37.513]                    provider=kubernetes
I0915 13:30:37.514] Annotations:       <none>
... skipping 178 lines ...
I0915 13:30:38.300]   selector:
I0915 13:30:38.300]     role: padawan
I0915 13:30:38.300]   sessionAffinity: None
I0915 13:30:38.300]   type: ClusterIP
I0915 13:30:38.300] status:
I0915 13:30:38.300]   loadBalancer: {}
W0915 13:30:38.401] E0915 13:30:37.532031   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:38.401] E0915 13:30:37.635665   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:38.401] E0915 13:30:37.738800   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:38.402] error: you must specify resources by --filename when --local is set.
W0915 13:30:38.402] Example resource specifications include:
W0915 13:30:38.402]    '-f rsrc.yaml'
W0915 13:30:38.402]    '--filename=rsrc.json'
W0915 13:30:38.413] E0915 13:30:38.412462   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:38.513] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0915 13:30:38.616] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0915 13:30:38.693] service "redis-master" deleted
I0915 13:30:38.787] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0915 13:30:38.870] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0915 13:30:39.029] service/redis-master created
... skipping 5 lines ...
I0915 13:30:39.733] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0915 13:30:39.811] service "redis-master" deleted
I0915 13:30:39.893] service "service-v1-test" deleted
I0915 13:30:39.985] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0915 13:30:40.068] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0915 13:30:40.218] service/redis-master created
W0915 13:30:40.319] E0915 13:30:38.533236   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:40.319] E0915 13:30:38.637205   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:40.320] E0915 13:30:38.740350   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:40.320] I0915 13:30:39.163133   52825 namespace_controller.go:171] Namespace has been deleted test-jobs
W0915 13:30:40.320] E0915 13:30:39.413618   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:40.320] E0915 13:30:39.534474   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:40.321] E0915 13:30:39.638525   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:40.321] E0915 13:30:39.741985   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:40.415] E0915 13:30:40.414859   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:40.515] service/redis-slave created
I0915 13:30:40.516] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0915 13:30:40.561] Successful
I0915 13:30:40.561] message:NAME           RSRC
I0915 13:30:40.562] kubernetes     145
I0915 13:30:40.562] redis-master   1465
... skipping 29 lines ...
I0915 13:30:42.172] +++ [0915 13:30:42] Creating namespace namespace-1568554242-11877
I0915 13:30:42.246] namespace/namespace-1568554242-11877 created
I0915 13:30:42.320] Context "test" modified.
I0915 13:30:42.330] +++ [0915 13:30:42] Testing kubectl(v1:daemonsets)
I0915 13:30:42.423] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:30:42.598] daemonset.apps/bind created
W0915 13:30:42.699] E0915 13:30:40.535755   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.699] E0915 13:30:40.639809   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.700] E0915 13:30:40.743127   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.700] E0915 13:30:41.416563   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.700] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0915 13:30:42.701] I0915 13:30:41.497952   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"54f249ee-e6f9-4480-b185-e3aee5070180", APIVersion:"apps/v1", ResourceVersion:"1480", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W0915 13:30:42.701] I0915 13:30:41.504762   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"38902540-3677-4f82-9490-6a9d8e3a09d0", APIVersion:"apps/v1", ResourceVersion:"1481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-kcwbh
W0915 13:30:42.702] I0915 13:30:41.507820   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"38902540-3677-4f82-9490-6a9d8e3a09d0", APIVersion:"apps/v1", ResourceVersion:"1481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-hh5tc
W0915 13:30:42.702] E0915 13:30:41.537095   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.702] E0915 13:30:41.641246   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.703] E0915 13:30:41.744322   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.703] E0915 13:30:42.417863   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.703] E0915 13:30:42.539166   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.703] I0915 13:30:42.594444   49294 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0915 13:30:42.704] I0915 13:30:42.604641   49294 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0915 13:30:42.704] E0915 13:30:42.642393   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:42.746] E0915 13:30:42.745892   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:42.847] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I0915 13:30:42.887] daemonset.apps/bind configured
I0915 13:30:42.990] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I0915 13:30:43.092] daemonset.apps/bind image updated
I0915 13:30:43.185] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I0915 13:30:43.270] daemonset.apps/bind env updated
... skipping 43 lines ...
I0915 13:30:45.467] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0915 13:30:45.566] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0915 13:30:45.671] daemonset.apps/bind rolled back
I0915 13:30:45.769] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0915 13:30:45.858] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0915 13:30:45.959] Successful
I0915 13:30:45.960] message:error: unable to find specified revision 1000000 in history
I0915 13:30:45.960] has:unable to find specified revision
I0915 13:30:46.047] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0915 13:30:46.133] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0915 13:30:46.229] daemonset.apps/bind rolled back
I0915 13:30:46.322] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0915 13:30:46.408] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0915 13:30:47.781] Namespace:    namespace-1568554246-28137
I0915 13:30:47.781] Selector:     app=guestbook,tier=frontend
I0915 13:30:47.781] Labels:       app=guestbook
I0915 13:30:47.782]               tier=frontend
I0915 13:30:47.782] Annotations:  <none>
I0915 13:30:47.782] Replicas:     3 current / 3 desired
I0915 13:30:47.782] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0915 13:30:47.782] Pod Template:
I0915 13:30:47.782]   Labels:  app=guestbook
I0915 13:30:47.782]            tier=frontend
I0915 13:30:47.782]   Containers:
I0915 13:30:47.782]    php-redis:
I0915 13:30:47.782]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0915 13:30:47.896] Namespace:    namespace-1568554246-28137
I0915 13:30:47.896] Selector:     app=guestbook,tier=frontend
I0915 13:30:47.896] Labels:       app=guestbook
I0915 13:30:47.896]               tier=frontend
I0915 13:30:47.896] Annotations:  <none>
I0915 13:30:47.896] Replicas:     3 current / 3 desired
I0915 13:30:47.896] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0915 13:30:47.896] Pod Template:
I0915 13:30:47.897]   Labels:  app=guestbook
I0915 13:30:47.897]            tier=frontend
I0915 13:30:47.897]   Containers:
I0915 13:30:47.897]    php-redis:
I0915 13:30:47.897]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0915 13:30:47.898]   Type    Reason            Age   From                    Message
I0915 13:30:47.898]   ----    ------            ----  ----                    -------
I0915 13:30:47.898]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-6st8q
I0915 13:30:47.898]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-9pj5s
I0915 13:30:47.898]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-sst9z
W0915 13:30:47.999] E0915 13:30:43.419301   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:47.999] E0915 13:30:43.540814   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.000] E0915 13:30:43.643587   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.000] E0915 13:30:43.747070   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.000] E0915 13:30:44.420730   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.001] E0915 13:30:44.542105   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.001] E0915 13:30:44.644933   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.001] E0915 13:30:44.748399   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.002] E0915 13:30:45.422193   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.002] E0915 13:30:45.543278   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.002] E0915 13:30:45.646062   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.003] E0915 13:30:45.749677   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.003] E0915 13:30:46.423757   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.003] E0915 13:30:46.544530   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.004] E0915 13:30:46.647263   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.004] E0915 13:30:46.750839   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.004] I0915 13:30:47.076315   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"bb5cc34b-7b66-47cf-b27c-115374c9bc0b", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5rqvr
W0915 13:30:48.005] I0915 13:30:47.078237   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"bb5cc34b-7b66-47cf-b27c-115374c9bc0b", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p72kp
W0915 13:30:48.005] I0915 13:30:47.079769   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"bb5cc34b-7b66-47cf-b27c-115374c9bc0b", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-glsnw
W0915 13:30:48.006] E0915 13:30:47.425061   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.006] I0915 13:30:47.534644   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"ec1a99f6-741f-4903-abcb-399b4db5c545", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6st8q
W0915 13:30:48.006] I0915 13:30:47.537251   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"ec1a99f6-741f-4903-abcb-399b4db5c545", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9pj5s
W0915 13:30:48.007] I0915 13:30:47.537303   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"ec1a99f6-741f-4903-abcb-399b4db5c545", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sst9z
W0915 13:30:48.007] E0915 13:30:47.545428   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.007] E0915 13:30:47.650682   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:48.008] E0915 13:30:47.752226   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:48.108] core.sh:1065: Successful describe
I0915 13:30:48.108] Name:         frontend
I0915 13:30:48.109] Namespace:    namespace-1568554246-28137
I0915 13:30:48.109] Selector:     app=guestbook,tier=frontend
I0915 13:30:48.109] Labels:       app=guestbook
I0915 13:30:48.109]               tier=frontend
I0915 13:30:48.109] Annotations:  <none>
I0915 13:30:48.109] Replicas:     3 current / 3 desired
I0915 13:30:48.109] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0915 13:30:48.109] Pod Template:
I0915 13:30:48.109]   Labels:  app=guestbook
I0915 13:30:48.110]            tier=frontend
I0915 13:30:48.110]   Containers:
I0915 13:30:48.110]    php-redis:
I0915 13:30:48.110]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0915 13:30:48.121] Namespace:    namespace-1568554246-28137
I0915 13:30:48.121] Selector:     app=guestbook,tier=frontend
I0915 13:30:48.121] Labels:       app=guestbook
I0915 13:30:48.121]               tier=frontend
I0915 13:30:48.121] Annotations:  <none>
I0915 13:30:48.122] Replicas:     3 current / 3 desired
I0915 13:30:48.122] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0915 13:30:48.122] Pod Template:
I0915 13:30:48.122]   Labels:  app=guestbook
I0915 13:30:48.122]            tier=frontend
I0915 13:30:48.122]   Containers:
I0915 13:30:48.122]    php-redis:
I0915 13:30:48.122]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0915 13:30:48.274] Namespace:    namespace-1568554246-28137
I0915 13:30:48.274] Selector:     app=guestbook,tier=frontend
I0915 13:30:48.274] Labels:       app=guestbook
I0915 13:30:48.274]               tier=frontend
I0915 13:30:48.274] Annotations:  <none>
I0915 13:30:48.274] Replicas:     3 current / 3 desired
I0915 13:30:48.274] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0915 13:30:48.274] Pod Template:
I0915 13:30:48.274]   Labels:  app=guestbook
I0915 13:30:48.275]            tier=frontend
I0915 13:30:48.275]   Containers:
I0915 13:30:48.275]    php-redis:
I0915 13:30:48.275]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0915 13:30:48.389] Namespace:    namespace-1568554246-28137
I0915 13:30:48.389] Selector:     app=guestbook,tier=frontend
I0915 13:30:48.389] Labels:       app=guestbook
I0915 13:30:48.389]               tier=frontend
I0915 13:30:48.389] Annotations:  <none>
I0915 13:30:48.389] Replicas:     3 current / 3 desired
I0915 13:30:48.390] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0915 13:30:48.390] Pod Template:
I0915 13:30:48.390]   Labels:  app=guestbook
I0915 13:30:48.390]            tier=frontend
I0915 13:30:48.390]   Containers:
I0915 13:30:48.390]    php-redis:
I0915 13:30:48.390]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0915 13:30:48.491] Namespace:    namespace-1568554246-28137
I0915 13:30:48.491] Selector:     app=guestbook,tier=frontend
I0915 13:30:48.491] Labels:       app=guestbook
I0915 13:30:48.491]               tier=frontend
I0915 13:30:48.491] Annotations:  <none>
I0915 13:30:48.491] Replicas:     3 current / 3 desired
I0915 13:30:48.491] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0915 13:30:48.492] Pod Template:
I0915 13:30:48.492]   Labels:  app=guestbook
I0915 13:30:48.492]            tier=frontend
I0915 13:30:48.492]   Containers:
I0915 13:30:48.492]    php-redis:
I0915 13:30:48.492]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0915 13:30:48.605] Namespace:    namespace-1568554246-28137
I0915 13:30:48.605] Selector:     app=guestbook,tier=frontend
I0915 13:30:48.605] Labels:       app=guestbook
I0915 13:30:48.606]               tier=frontend
I0915 13:30:48.606] Annotations:  <none>
I0915 13:30:48.606] Replicas:     3 current / 3 desired
I0915 13:30:48.606] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0915 13:30:48.607] Pod Template:
I0915 13:30:48.607]   Labels:  app=guestbook
I0915 13:30:48.607]            tier=frontend
I0915 13:30:48.607]   Containers:
I0915 13:30:48.608]    php-redis:
I0915 13:30:48.608]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 21 lines ...
I0915 13:30:49.277] replicationcontroller/frontend scaled
I0915 13:30:49.376] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0915 13:30:49.462] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0915 13:30:49.537] replicationcontroller/frontend scaled
I0915 13:30:49.632] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0915 13:30:49.707] replicationcontroller "frontend" deleted
W0915 13:30:49.808] E0915 13:30:48.426321   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:49.808] E0915 13:30:48.547106   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:49.809] E0915 13:30:48.652405   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:49.809] E0915 13:30:48.754569   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:49.809] I0915 13:30:48.778898   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"ec1a99f6-741f-4903-abcb-399b4db5c545", APIVersion:"v1", ResourceVersion:"1586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-6st8q
W0915 13:30:49.809] error: Expected replicas to be 3, was 2
W0915 13:30:49.810] I0915 13:30:49.280169   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"ec1a99f6-741f-4903-abcb-399b4db5c545", APIVersion:"v1", ResourceVersion:"1592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gf697
W0915 13:30:49.810] E0915 13:30:49.427977   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:49.810] I0915 13:30:49.542538   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"ec1a99f6-741f-4903-abcb-399b4db5c545", APIVersion:"v1", ResourceVersion:"1597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-gf697
W0915 13:30:49.810] E0915 13:30:49.547957   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:49.811] E0915 13:30:49.653706   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:49.811] E0915 13:30:49.756093   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:49.867] I0915 13:30:49.866504   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-master", UID:"290fa680-9f4f-4a76-9834-dae2b33eca63", APIVersion:"v1", ResourceVersion:"1609", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-mw9f7
I0915 13:30:49.968] replicationcontroller/redis-master created
I0915 13:30:50.027] replicationcontroller/redis-slave created
I0915 13:30:50.117] replicationcontroller/redis-master scaled
I0915 13:30:50.120] replicationcontroller/redis-slave scaled
I0915 13:30:50.213] core.sh:1117: Successful get rc redis-master {{.spec.replicas}}: 4
... skipping 4 lines ...
W0915 13:30:50.475] I0915 13:30:50.032819   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-slave", UID:"1f0a19dd-bbd6-4694-8f78-dc2cf8be69b4", APIVersion:"v1", ResourceVersion:"1614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-sc4cg
W0915 13:30:50.475] I0915 13:30:50.121007   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-master", UID:"290fa680-9f4f-4a76-9834-dae2b33eca63", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-s8snm
W0915 13:30:50.476] I0915 13:30:50.123269   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-slave", UID:"1f0a19dd-bbd6-4694-8f78-dc2cf8be69b4", APIVersion:"v1", ResourceVersion:"1623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-mz4ff
W0915 13:30:50.476] I0915 13:30:50.125092   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-master", UID:"290fa680-9f4f-4a76-9834-dae2b33eca63", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-fcq27
W0915 13:30:50.477] I0915 13:30:50.125120   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-slave", UID:"1f0a19dd-bbd6-4694-8f78-dc2cf8be69b4", APIVersion:"v1", ResourceVersion:"1623", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-w9t8r
W0915 13:30:50.477] I0915 13:30:50.125476   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-master", UID:"290fa680-9f4f-4a76-9834-dae2b33eca63", APIVersion:"v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-whnnh
W0915 13:30:50.477] E0915 13:30:50.429448   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:50.546] I0915 13:30:50.546190   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment", UID:"5750ef87-41b3-4c53-a1c9-eb435254a26f", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0915 13:30:50.549] E0915 13:30:50.549228   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:50.550] I0915 13:30:50.549305   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-6986c7bc94", UID:"3980df8d-2edf-423e-98de-1213ff764ebe", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-d6dnf
W0915 13:30:50.552] I0915 13:30:50.551708   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-6986c7bc94", UID:"3980df8d-2edf-423e-98de-1213ff764ebe", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-szdvn
W0915 13:30:50.553] I0915 13:30:50.552965   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-6986c7bc94", UID:"3980df8d-2edf-423e-98de-1213ff764ebe", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-bx666
W0915 13:30:50.643] I0915 13:30:50.642873   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment", UID:"5750ef87-41b3-4c53-a1c9-eb435254a26f", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W0915 13:30:50.648] I0915 13:30:50.647806   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-6986c7bc94", UID:"3980df8d-2edf-423e-98de-1213ff764ebe", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-bx666
W0915 13:30:50.650] I0915 13:30:50.649529   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-6986c7bc94", UID:"3980df8d-2edf-423e-98de-1213ff764ebe", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-szdvn
W0915 13:30:50.655] E0915 13:30:50.655341   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:50.756] deployment.apps/nginx-deployment created
I0915 13:30:50.756] deployment.apps/nginx-deployment scaled
I0915 13:30:50.757] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I0915 13:30:50.811] deployment.apps "nginx-deployment" deleted
I0915 13:30:50.909] Successful
I0915 13:30:50.909] message:service/expose-test-deployment exposed
I0915 13:30:50.909] has:service/expose-test-deployment exposed
I0915 13:30:50.987] service "expose-test-deployment" deleted
I0915 13:30:51.074] Successful
I0915 13:30:51.075] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0915 13:30:51.075] See 'kubectl expose -h' for help and examples
I0915 13:30:51.075] has:invalid deployment: no selectors
W0915 13:30:51.176] E0915 13:30:50.757609   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:51.231] I0915 13:30:51.230385   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment", UID:"24ca8cf7-13c6-4d7e-b80c-b537a53c582b", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0915 13:30:51.234] I0915 13:30:51.234140   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-6986c7bc94", UID:"ce9dfbeb-82bc-411f-b164-1ac0d25eda60", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-pz86z
W0915 13:30:51.238] I0915 13:30:51.237484   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-6986c7bc94", UID:"ce9dfbeb-82bc-411f-b164-1ac0d25eda60", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-bckw9
W0915 13:30:51.239] I0915 13:30:51.237754   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-6986c7bc94", UID:"ce9dfbeb-82bc-411f-b164-1ac0d25eda60", APIVersion:"apps/v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-tpg9g
I0915 13:30:51.340] deployment.apps/nginx-deployment created
I0915 13:30:51.341] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0915 13:30:51.411] service/nginx-deployment exposed
I0915 13:30:51.503] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0915 13:30:51.577] deployment.apps "nginx-deployment" deleted
I0915 13:30:51.586] service "nginx-deployment" deleted
W0915 13:30:51.687] E0915 13:30:51.431218   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:51.688] E0915 13:30:51.550636   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:51.688] E0915 13:30:51.656682   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:51.749] I0915 13:30:51.748275   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"631c0ea4-08f8-45f6-9494-06ddfd9d8996", APIVersion:"v1", ResourceVersion:"1722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v54ps
W0915 13:30:51.752] I0915 13:30:51.751729   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"631c0ea4-08f8-45f6-9494-06ddfd9d8996", APIVersion:"v1", ResourceVersion:"1722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kb6hr
W0915 13:30:51.752] I0915 13:30:51.751783   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"631c0ea4-08f8-45f6-9494-06ddfd9d8996", APIVersion:"v1", ResourceVersion:"1722", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9jbtc
W0915 13:30:51.759] E0915 13:30:51.758427   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:51.859] replicationcontroller/frontend created
I0915 13:30:51.860] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I0915 13:30:51.935] service/frontend exposed
I0915 13:30:52.038] core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0915 13:30:52.129] service/frontend-2 exposed
I0915 13:30:52.238] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
I0915 13:30:52.401] pod/valid-pod created
W0915 13:30:52.502] E0915 13:30:52.432344   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:52.552] E0915 13:30:52.551845   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:52.652] service/frontend-3 exposed
I0915 13:30:52.653] core.sh:1170: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
I0915 13:30:52.702] service/frontend-4 exposed
I0915 13:30:52.803] core.sh:1174: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
I0915 13:30:52.891] service/frontend-5 exposed
I0915 13:30:52.983] core.sh:1178: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
I0915 13:30:53.058] pod "valid-pod" deleted
I0915 13:30:53.145] service "frontend" deleted
I0915 13:30:53.152] service "frontend-2" deleted
I0915 13:30:53.157] service "frontend-3" deleted
I0915 13:30:53.164] service "frontend-4" deleted
I0915 13:30:53.171] service "frontend-5" deleted
I0915 13:30:53.262] Successful
I0915 13:30:53.263] message:error: cannot expose a Node
I0915 13:30:53.263] has:cannot expose
I0915 13:30:53.349] Successful
I0915 13:30:53.349] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0915 13:30:53.350] has:metadata.name: Invalid value
I0915 13:30:53.440] Successful
I0915 13:30:53.440] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 7 lines ...
I0915 13:30:53.861] service "etcd-server" deleted
I0915 13:30:53.951] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0915 13:30:54.025] replicationcontroller "frontend" deleted
I0915 13:30:54.114] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:30:54.204] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0915 13:30:54.357] replicationcontroller/frontend created
W0915 13:30:54.457] E0915 13:30:52.657951   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:54.458] E0915 13:30:52.759549   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:54.458] E0915 13:30:53.433327   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:54.458] E0915 13:30:53.554122   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:54.459] E0915 13:30:53.659417   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:54.459] E0915 13:30:53.760756   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:54.460] I0915 13:30:54.361149   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"34f48185-b7bb-4c13-9760-93721a91bcab", APIVersion:"v1", ResourceVersion:"1785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j6p76
W0915 13:30:54.460] I0915 13:30:54.364083   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"34f48185-b7bb-4c13-9760-93721a91bcab", APIVersion:"v1", ResourceVersion:"1785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bt5nr
W0915 13:30:54.461] I0915 13:30:54.364659   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"34f48185-b7bb-4c13-9760-93721a91bcab", APIVersion:"v1", ResourceVersion:"1785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ht76b
W0915 13:30:54.461] E0915 13:30:54.434732   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:54.526] I0915 13:30:54.525575   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-slave", UID:"91707764-d8b0-484d-91f9-a0ce89864527", APIVersion:"v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-zdt9q
W0915 13:30:54.529] I0915 13:30:54.529099   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"redis-slave", UID:"91707764-d8b0-484d-91f9-a0ce89864527", APIVersion:"v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-9vhdn
W0915 13:30:54.556] E0915 13:30:54.555646   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0915 13:30:54.657] replicationcontroller/redis-slave created
I0915 13:30:54.657] core.sh:1228: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I0915 13:30:54.724] core.sh:1232: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
I0915 13:30:54.799] replicationcontroller "frontend" deleted
I0915 13:30:54.803] replicationcontroller "redis-slave" deleted
I0915 13:30:54.898] core.sh:1236: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 3 lines ...
I0915 13:30:55.317] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0915 13:30:55.409] core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0915 13:30:55.485] horizontalpodautoscaler.autoscaling "frontend" deleted
I0915 13:30:55.566] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0915 13:30:55.656] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0915 13:30:55.730] horizontalpodautoscaler.autoscaling "frontend" deleted
W0915 13:30:55.831] E0915 13:30:54.660519   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:55.831] E0915 13:30:54.762082   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:55.832] I0915 13:30:55.138083   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"9c4297ec-abaa-4ec9-abfc-df65081de50c", APIVersion:"v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vvqfw
W0915 13:30:55.832] I0915 13:30:55.140409   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"9c4297ec-abaa-4ec9-abfc-df65081de50c", APIVersion:"v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rh24h
W0915 13:30:55.832] I0915 13:30:55.141561   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568554246-28137", Name:"frontend", UID:"9c4297ec-abaa-4ec9-abfc-df65081de50c", APIVersion:"v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-z7pdv
W0915 13:30:55.832] E0915 13:30:55.436162   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:55.833] E0915 13:30:55.557089   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:55.833] E0915 13:30:55.662007   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:55.833] E0915 13:30:55.763577   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:55.833] Error: required flag(s) "max" not set
W0915 13:30:55.833] 
W0915 13:30:55.834] 
W0915 13:30:55.834] Examples:
W0915 13:30:55.834]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0915 13:30:55.834]   kubectl autoscale deployment foo --min=2 --max=10
W0915 13:30:55.834]   
... skipping 54 lines ...
I0915 13:30:56.068]           limits:
I0915 13:30:56.068]             cpu: 300m
I0915 13:30:56.069]           requests:
I0915 13:30:56.069]             cpu: 300m
I0915 13:30:56.069]       terminationGracePeriodSeconds: 0
I0915 13:30:56.070] status: {}
W0915 13:30:56.170] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0915 13:30:56.297] deployment.apps/nginx-deployment-resources created
I0915 13:30:56.395] core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0915 13:30:56.493] core.sh:1266: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0915 13:30:56.582] core.sh:1267: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0915 13:30:56.677] deployment.apps/nginx-deployment-resources resource requirements updated
W0915 13:30:56.778] I0915 13:30:56.301107   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-resources", UID:"f59a124b-166a-4fde-a029-fef5d091c057", APIVersion:"apps/v1", ResourceVersion:"1834", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
W0915 13:30:56.779] I0915 13:30:56.304439   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-resources-67f8cfff5", UID:"b7ef274d-ef9d-4fe4-a386-79c1093980ce", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-8stqc
W0915 13:30:56.779] I0915 13:30:56.307376   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-resources-67f8cfff5", UID:"b7ef274d-ef9d-4fe4-a386-79c1093980ce", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-65sxc
W0915 13:30:56.780] I0915 13:30:56.307825   52825 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-resources-67f8cfff5", UID:"b7ef274d-ef9d-4fe4-a386-79c1093980ce", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-7mvc4
W0915 13:30:56.780] E0915 13:30:56.437559   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:56.780] E0915 13:30:56.558319   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:56.780] E0915 13:30:56.663241   52825 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0915 13:30:56.781] I0915 13:30:56.680767   52825 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568554246-28137", Name:"nginx-deployment-resources", UID: