PR: draveness: WIP: refactor interpod affinity with the scheduling framework
Result: FAILURE
Tests: 1 failed / 2470 succeeded
Started: 2019-08-14 11:09
Elapsed: 26m49s
Revision:
Builder: gke-prow-ssd-pool-1a225945-l1d7
Refs: master:34791349, 80898:b9c7479a
pod: ca9857e4-be83-11e9-bc02-ae225b01b9ea
infra-commit: 381773791
repo: k8s.io/kubernetes
repo-commit: 6fa58b7117fb6af587639939cadb0a4d26e2c925
repos: {u'k8s.io/kubernetes': u'master:34791349d656a9f8e45b7093012e29ad08782ffa,80898:b9c7479af97ddd319bc2767e2fcca07fb52e814d'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin (1m4s)

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
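To reproduce locally, the command above needs a checkout of k8s.io/kubernetes at the merge state listed under repos and an etcd reachable at 127.0.0.1:2379, which the integration framework expects. A minimal sketch, assuming the repository's hack/install-etcd.sh helper and the test-integration make target are available in that checkout (paths and the checkout step are illustrative, not taken from this job):

# Sketch only: check out the PR merge listed above, then run the single failing test.
cd $GOPATH/src/k8s.io/kubernetes
./hack/install-etcd.sh                        # installs a local etcd under third_party/etcd
export PATH="$(pwd)/third_party/etcd:${PATH}" # make the installed etcd visible to the test harness
make test-integration WHAT=./test/integration/scheduler \
    KUBE_TEST_ARGS="-run ^TestPreemptWithPermitPlugin$"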
=== RUN   TestPreemptWithPermitPlugin
I0814 11:32:04.558014  110801 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 11:32:04.558037  110801 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 11:32:04.558049  110801 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 11:32:04.558059  110801 master.go:234] Using reconciler: 
I0814 11:32:04.559979  110801 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.560076  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.560088  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.560127  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.560195  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.560588  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.560738  110801 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 11:32:04.560768  110801 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.560812  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.560923  110801 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 11:32:04.561023  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.561039  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.561072  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.561114  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.561429  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.561571  110801 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 11:32:04.561601  110801 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.561666  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.561677  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.561707  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.561766  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.561798  110801 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 11:32:04.561980  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.562697  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.563169  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.564118  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.564292  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.564497  110801 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 11:32:04.564606  110801 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 11:32:04.564647  110801 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.564730  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.564739  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.564781  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.564853  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.565768  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.565782  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.565889  110801 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 11:32:04.566037  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.566043  110801 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.566101  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.566111  110801 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 11:32:04.566114  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.566160  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.566203  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.566439  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.566508  110801 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 11:32:04.566691  110801 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.566760  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.566769  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.566807  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.566846  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.566875  110801 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 11:32:04.566964  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.567071  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.567347  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.567429  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.567455  110801 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 11:32:04.567501  110801 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 11:32:04.567594  110801 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.567652  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.567661  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.567689  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.567792  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.567859  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.568010  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.568041  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.568089  110801 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 11:32:04.568136  110801 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 11:32:04.568205  110801 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.568277  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.568288  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.568315  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.568362  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.568718  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.568819  110801 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 11:32:04.568957  110801 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.569037  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.569047  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.569075  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.569124  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.569152  110801 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 11:32:04.569306  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.569586  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.569695  110801 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 11:32:04.569811  110801 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.569873  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.569887  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.569913  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.569949  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.569981  110801 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 11:32:04.570161  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.570621  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.570699  110801 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 11:32:04.570815  110801 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.570877  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.570887  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.570914  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.570957  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.570981  110801 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 11:32:04.571133  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.571505  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.571561  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.571662  110801 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 11:32:04.571788  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.571806  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.571845  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.571856  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.571884  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.571885  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.571934  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.571961  110801 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 11:32:04.572239  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.573110  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.573673  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.573981  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.574087  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.574090  110801 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 11:32:04.574130  110801 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 11:32:04.574223  110801 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.574281  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.574298  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.574327  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.574442  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.574496  110801 watch_cache.go:405] Replace watchCache (rev: 29445) 
I0814 11:32:04.574764  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.574794  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.574869  110801 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 11:32:04.575006  110801 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.575063  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.575072  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.575103  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.575140  110801 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 11:32:04.575398  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.575941  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.575977  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.576048  110801 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 11:32:04.576070  110801 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.576147  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.576157  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.576186  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.576210  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.576241  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.576267  110801 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 11:32:04.576493  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.576762  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.576863  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.576873  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.576901  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.576945  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.576988  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.577596  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.577669  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.577766  110801 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.577828  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.577839  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.577870  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.577838  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.577922  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.578210  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.578336  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.578496  110801 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 11:32:04.578550  110801 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 11:32:04.579140  110801 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.579334  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.579325  110801 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.580243  110801 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.581117  110801 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.581868  110801 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.582564  110801 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.582945  110801 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.583057  110801 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.583246  110801 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.584090  110801 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.584662  110801 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.584846  110801 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.585644  110801 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.585950  110801 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.586479  110801 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.586950  110801 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.587554  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.587721  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.587866  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.587962  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.588112  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.588219  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.588374  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.589013  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.589230  110801 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.590107  110801 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.590982  110801 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.591231  110801 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.591504  110801 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.592288  110801 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.592495  110801 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.593120  110801 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.593955  110801 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.594544  110801 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.595217  110801 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.595431  110801 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.595555  110801 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 11:32:04.595580  110801 master.go:434] Enabling API group "authentication.k8s.io".
I0814 11:32:04.595597  110801 master.go:434] Enabling API group "authorization.k8s.io".
I0814 11:32:04.595789  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.595887  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.595901  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.595946  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.595999  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.596579  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.596731  110801 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 11:32:04.596897  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.596967  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.596978  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.597011  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.597071  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.597111  110801 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 11:32:04.597340  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.597703  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.597822  110801 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 11:32:04.597950  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.598007  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.598017  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.598049  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.598087  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.598124  110801 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 11:32:04.598285  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.598601  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.598701  110801 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 11:32:04.598716  110801 master.go:434] Enabling API group "autoscaling".
I0814 11:32:04.598790  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.598837  110801 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 11:32:04.599826  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.600063  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.600807  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.600802  110801 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.600888  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.600899  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.600930  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.600987  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.601256  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.601373  110801 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 11:32:04.601496  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.601562  110801 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 11:32:04.601673  110801 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.601735  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.601744  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.601773  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.601826  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.603306  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.604071  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.604148  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.604853  110801 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 11:32:04.604880  110801 master.go:434] Enabling API group "batch".
I0814 11:32:04.604963  110801 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 11:32:04.605807  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.606461  110801 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.606578  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.606591  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.606813  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.606883  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.607808  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.607983  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.608105  110801 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 11:32:04.608135  110801 master.go:434] Enabling API group "certificates.k8s.io".
I0814 11:32:04.608262  110801 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 11:32:04.609107  110801 watch_cache.go:405] Replace watchCache (rev: 29446) 
I0814 11:32:04.608299  110801 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.609483  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.609493  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.609568  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.609766  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.610094  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.610149  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.610205  110801 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 11:32:04.610282  110801 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 11:32:04.610339  110801 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.610404  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.610414  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.610443  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.610486  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.610723  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.610790  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.610821  110801 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 11:32:04.610834  110801 master.go:434] Enabling API group "coordination.k8s.io".
I0814 11:32:04.610873  110801 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 11:32:04.610963  110801 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.611028  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.611038  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.611079  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.611120  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.611368  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.611412  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.611464  110801 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 11:32:04.611481  110801 master.go:434] Enabling API group "extensions".
I0814 11:32:04.611515  110801 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 11:32:04.611634  110801 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.611694  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.611703  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.611744  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.611794  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.612011  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.612072  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.612098  110801 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 11:32:04.612120  110801 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 11:32:04.612217  110801 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.612270  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.612280  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.612315  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.612366  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.612796  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.612857  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.612971  110801 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 11:32:04.612984  110801 master.go:434] Enabling API group "networking.k8s.io".
I0814 11:32:04.613026  110801 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 11:32:04.613180  110801 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.613245  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.613261  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.613357  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.613397  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.613706  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.613818  110801 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 11:32:04.613833  110801 master.go:434] Enabling API group "node.k8s.io".
I0814 11:32:04.613953  110801 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.614011  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.614020  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.614057  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.614090  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.614130  110801 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 11:32:04.614321  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.615888  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.615971  110801 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 11:32:04.616092  110801 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.616143  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.616152  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.616180  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.616216  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.616241  110801 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 11:32:04.616432  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.616485  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.616854  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.616855  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.616891  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.616998  110801 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 11:32:04.617009  110801 master.go:434] Enabling API group "policy".
I0814 11:32:04.617037  110801 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.617057  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.617089  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.617098  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.617123  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.617162  110801 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 11:32:04.617305  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.617497  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.617614  110801 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 11:32:04.617640  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.617748  110801 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.617786  110801 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 11:32:04.617810  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.617821  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.617848  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.617956  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.618001  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.618214  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.618300  110801 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 11:32:04.618326  110801 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.618384  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.618393  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.618424  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.618452  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.618495  110801 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 11:32:04.618714  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.618920  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.618991  110801 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 11:32:04.619115  110801 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.619164  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.619173  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.619199  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.619235  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.619259  110801 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 11:32:04.619451  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.619851  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.619925  110801 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 11:32:04.619959  110801 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.620013  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.620022  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.620047  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.620085  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.620109  110801 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 11:32:04.620290  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.620560  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.620875  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.621166  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.621647  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.621656  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.621738  110801 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 11:32:04.621828  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.621851  110801 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.621919  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.621928  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.621955  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.621979  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.621989  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.622016  110801 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 11:32:04.622211  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.622421  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.622506  110801 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 11:32:04.622552  110801 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.622610  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.622623  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.622649  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.622702  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.622733  110801 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 11:32:04.622892  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.623127  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.623195  110801 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 11:32:04.623232  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.623329  110801 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.623381  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.623390  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.623414  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.623442  110801 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 11:32:04.623594  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.623809  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.623885  110801 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 11:32:04.623905  110801 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 11:32:04.624510  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.625047  110801 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 11:32:04.625423  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.626458  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.626543  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.626968  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.627051  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.628804  110801 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.628887  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.628897  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.628927  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.628994  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.629250  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.629344  110801 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 11:32:04.629485  110801 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.629563  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.629584  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.629610  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.629708  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.629739  110801 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 11:32:04.629882  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.630094  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.630097  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.630165  110801 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 11:32:04.630177  110801 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 11:32:04.630314  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.630360  110801 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 11:32:04.630373  110801 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 11:32:04.630484  110801 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.630557  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.630567  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.630640  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.630683  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.630879  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.630930  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.630964  110801 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 11:32:04.631077  110801 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.631151  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.631161  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.631171  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.631187  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.631250  110801 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 11:32:04.631278  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.631454  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.631735  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.631778  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.631818  110801 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 11:32:04.631845  110801 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.631900  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.631909  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.631959  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.632286  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.632576  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.632670  110801 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 11:32:04.632695  110801 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.632742  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.632752  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.632762  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.632786  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.632824  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.632850  110801 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 11:32:04.633006  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.634008  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.634088  110801 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 11:32:04.634207  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.634222  110801 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.634270  110801 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 11:32:04.634279  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.634289  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.634323  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.634447  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.634643  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.634674  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.634738  110801 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 11:32:04.634779  110801 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 11:32:04.634890  110801 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.634942  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.634952  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.634981  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.635078  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.635311  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.635388  110801 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 11:32:04.635392  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.635414  110801 master.go:434] Enabling API group "storage.k8s.io".
I0814 11:32:04.635434  110801 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 11:32:04.635560  110801 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.635629  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.635638  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.635666  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.635778  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.635980  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.636335  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.636497  110801 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 11:32:04.636630  110801 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.636684  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.636843  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.636905  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.636937  110801 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 11:32:04.637103  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.637302  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.637384  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.637424  110801 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 11:32:04.637450  110801 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 11:32:04.637560  110801 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.637610  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.637619  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.637671  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.637723  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.638217  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.638354  110801 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 11:32:04.638473  110801 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.638520  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.638545  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.638565  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.638600  110801 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 11:32:04.638621  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.638761  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.639000  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.639033  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.639088  110801 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 11:32:04.639194  110801 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.639211  110801 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 11:32:04.639244  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.639253  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.639334  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.639450  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.639726  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.639838  110801 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 11:32:04.639856  110801 master.go:434] Enabling API group "apps".
I0814 11:32:04.639887  110801 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.639952  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.639968  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.639995  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.640036  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.640069  110801 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 11:32:04.640312  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.640524  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.640624  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.640666  110801 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 11:32:04.640689  110801 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 11:32:04.640689  110801 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.640744  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.640752  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.640777  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.640914  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.641126  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.641136  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.641196  110801 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 11:32:04.641222  110801 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.641275  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.641285  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.641312  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.641346  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.641371  110801 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 11:32:04.641400  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.641620  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.641786  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.641846  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.642214  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.642398  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.642523  110801 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 11:32:04.642608  110801 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.642697  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.642732  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.642801  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.642552  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.642885  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.642742  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.642856  110801 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 11:32:04.643012  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.642701  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.643219  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.643303  110801 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 11:32:04.643305  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.643315  110801 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 11:32:04.643337  110801 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 11:32:04.643356  110801 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.643520  110801 client.go:354] parsed scheme: ""
I0814 11:32:04.643548  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:04.643579  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:04.643856  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.644326  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:04.644434  110801 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 11:32:04.644447  110801 master.go:434] Enabling API group "events.k8s.io".
I0814 11:32:04.644696  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:04.644725  110801 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 11:32:04.644768  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.644714  110801 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.644998  110801 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.645150  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.645234  110801 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.645336  110801 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.645423  110801 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.645511  110801 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.645693  110801 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.645783  110801 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.645945  110801 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.646067  110801 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.647036  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.647204  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.647283  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.647297  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.648051  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.648641  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.648897  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.649112  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.649740  110801 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 11:32:04.649855  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.650124  110801 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.650723  110801 watch_cache.go:405] Replace watchCache (rev: 29447) 
I0814 11:32:04.650912  110801 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.651113  110801 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.651837  110801 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.652052  110801 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:32:04.652094  110801 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 11:32:04.652692  110801 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.652811  110801 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.652997  110801 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.653702  110801 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.654388  110801 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.655293  110801 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.655603  110801 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.656463  110801 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.657229  110801 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.657462  110801 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.658060  110801 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:32:04.658127  110801 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 11:32:04.658955  110801 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.659244  110801 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.659777  110801 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.660364  110801 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.660914  110801 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.661494  110801 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.662212  110801 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.662774  110801 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.663289  110801 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.663970  110801 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.664562  110801 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:32:04.664619  110801 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 11:32:04.665137  110801 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.665777  110801 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:32:04.665836  110801 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 11:32:04.666365  110801 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.666853  110801 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.667057  110801 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.667579  110801 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.668000  110801 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.668428  110801 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.668990  110801 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:32:04.669051  110801 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 11:32:04.669801  110801 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.670499  110801 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.670747  110801 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.671550  110801 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.671830  110801 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.672106  110801 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.672974  110801 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.673228  110801 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.673475  110801 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.674224  110801 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.674582  110801 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.674829  110801 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:32:04.674894  110801 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 11:32:04.674901  110801 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 11:32:04.675681  110801 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.676323  110801 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.676996  110801 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.677671  110801 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.678582  110801 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"97307195-266e-4e61-8b36-750e76a318ee", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:32:04.681357  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:04.681388  110801 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 11:32:04.681399  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:04.681410  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:04.681420  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:04.681428  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:04.681463  110801 httplog.go:90] GET /healthz: (216.866µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:04.683041  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.78509ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0814 11:32:04.685439  110801 httplog.go:90] GET /api/v1/services: (1.065212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0814 11:32:04.689070  110801 httplog.go:90] GET /api/v1/services: (954.651µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0814 11:32:04.692743  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:04.692766  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:04.692784  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:04.692794  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:04.692802  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:04.692825  110801 httplog.go:90] GET /healthz: (176.201µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:04.693424  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.200661ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0814 11:32:04.695561  110801 httplog.go:90] GET /api/v1/services: (1.05494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:04.695961  110801 httplog.go:90] GET /api/v1/services: (1.304539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:04.696267  110801 httplog.go:90] POST /api/v1/namespaces: (2.322328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0814 11:32:04.697827  110801 httplog.go:90] GET /api/v1/namespaces/kube-public: (919.947µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53480]
I0814 11:32:04.699507  110801 httplog.go:90] POST /api/v1/namespaces: (1.350467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:04.700488  110801 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (717.536µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:04.701931  110801 httplog.go:90] POST /api/v1/namespaces: (1.158993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:04.782215  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:04.782267  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:04.782281  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:04.782297  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:04.782306  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:04.782335  110801 httplog.go:90] GET /healthz: (266.579µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:04.793651  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:04.793691  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:04.793705  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:04.793716  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:04.793724  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:04.793761  110801 httplog.go:90] GET /healthz: (313.287µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:04.882204  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:04.882243  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:04.882262  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:04.882273  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:04.882282  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:04.882310  110801 httplog.go:90] GET /healthz: (258.697µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:04.893491  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:04.893521  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:04.893560  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:04.893571  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:04.893579  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:04.893635  110801 httplog.go:90] GET /healthz: (304.227µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:04.982302  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:04.982341  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:04.982354  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:04.982364  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:04.982372  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:04.982418  110801 httplog.go:90] GET /healthz: (278.713µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:04.993598  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:04.993629  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:04.993639  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:04.993646  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:04.993652  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:04.993682  110801 httplog.go:90] GET /healthz: (287.589µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.082191  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.082226  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.082238  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.082247  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.082255  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.082286  110801 httplog.go:90] GET /healthz: (259.239µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:05.093561  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.093589  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.093598  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.093605  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.093622  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.093648  110801 httplog.go:90] GET /healthz: (264.589µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.182134  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.182169  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.182181  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.182190  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.182198  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.182225  110801 httplog.go:90] GET /healthz: (190.658µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:05.193500  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.193558  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.193573  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.193583  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.193591  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.193664  110801 httplog.go:90] GET /healthz: (289.532µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.282223  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.282264  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.282277  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.282288  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.282304  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.282345  110801 httplog.go:90] GET /healthz: (270.562µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:05.293654  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.293739  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.293753  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.293763  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.293772  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.293841  110801 httplog.go:90] GET /healthz: (408.613µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.382230  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.382278  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.382305  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.382315  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.382323  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.382366  110801 httplog.go:90] GET /healthz: (282.956µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:05.393569  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.393608  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.393621  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.393631  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.393639  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.393695  110801 httplog.go:90] GET /healthz: (298.012µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.482289  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.482321  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.482330  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.482337  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.482342  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.482370  110801 httplog.go:90] GET /healthz: (218.24µs) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:05.493621  110801 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:32:05.493659  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.493673  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.493684  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.493692  110801 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.493801  110801 httplog.go:90] GET /healthz: (322.54µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.558351  110801 client.go:354] parsed scheme: ""
I0814 11:32:05.558382  110801 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:32:05.558437  110801 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:32:05.558504  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:05.559081  110801 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:32:05.559147  110801 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:32:05.583471  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.583499  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.583510  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.583519  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.583590  110801 httplog.go:90] GET /healthz: (1.467783ms) 0 [Go-http-client/1.1 127.0.0.1:53478]
I0814 11:32:05.594283  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.594315  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.594326  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.594334  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.594376  110801 httplog.go:90] GET /healthz: (1.045326ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.682938  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.410684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.683186  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.683203  110801 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:32:05.683215  110801 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:32:05.683223  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:32:05.683248  110801 httplog.go:90] GET /healthz: (736.146µs) 0 [Go-http-client/1.1 127.0.0.1:53500]
I0814 11:32:05.683478  110801 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.424335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.683665  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.057505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.685281  110801 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.026007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.685569  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.799326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.685943  110801 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.028893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.686098  110801 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 11:32:05.687867  110801 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.647521ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.688507  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.640681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53478]
I0814 11:32:05.690318  110801 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.581401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.690465  110801 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (4.51799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.690560  110801 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 11:32:05.690576  110801 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0814 11:32:05.691003  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.129598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53500]
I0814 11:32:05.692035  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (797.169µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.693524  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (950.442µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.694163  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.694196  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:05.694219  110801 httplog.go:90] GET /healthz: (927.03µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.694903  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (991.664µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.695988  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (738.155µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.697057  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (759.888µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.698116  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (719.8µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.700150  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.682915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.700372  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 11:32:05.701182  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (627.289µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.703021  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.379696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.703327  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 11:32:05.704371  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (887.567µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.706026  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.17451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.706190  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 11:32:05.707119  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (745.897µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.708995  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.709218  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 11:32:05.710202  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (721.451µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.711779  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.17742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.711965  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 11:32:05.712919  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (747.792µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.714587  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.304907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.714737  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 11:32:05.715430  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (533.164µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.716828  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.070277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.716989  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 11:32:05.717931  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (777.505µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.719922  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.553915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.720101  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 11:32:05.720888  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (618.167µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.723067  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.754245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.723358  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 11:32:05.724393  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (715.964µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.726056  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.235744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.726424  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 11:32:05.727268  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (620.325µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.728942  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.28195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.729124  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 11:32:05.730117  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (761.496µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.732161  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.650564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.732683  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 11:32:05.734252  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (987.268µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.735924  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.387499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.736283  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 11:32:05.737289  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (678.493µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.739033  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.277477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.739360  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 11:32:05.740550  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (997.691µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.742417  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.497624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.742730  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 11:32:05.743755  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (791.394µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.745457  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.217215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.745786  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 11:32:05.746800  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (790.052µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.748864  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.547771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.749023  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 11:32:05.750046  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (828.991µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.751862  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.407431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.752024  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 11:32:05.753617  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.318095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.755502  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.755746  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 11:32:05.756633  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (690.032µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.758184  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.190008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.758370  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 11:32:05.759484  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (878.581µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.761547  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.636046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.761952  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 11:32:05.762976  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (845.537µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.764724  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.765016  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 11:32:05.766050  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (900.166µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.767681  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.277552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.767934  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 11:32:05.769083  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (918.547µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.771167  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.585292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.771406  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 11:32:05.772638  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.020359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.774701  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.484632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.774973  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 11:32:05.776094  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (920.747µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.778373  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.767737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.778780  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 11:32:05.779911  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (879.63µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.781897  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.60434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.782070  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 11:32:05.782609  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.782646  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:05.782672  110801 httplog.go:90] GET /healthz: (727.638µs) 0 [Go-http-client/1.1 127.0.0.1:53498]
I0814 11:32:05.783090  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (813.366µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.784962  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.451093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.785204  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 11:32:05.786374  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (949.675µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.788581  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.685069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.788736  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 11:32:05.790074  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (993.559µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.792652  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.998717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.792948  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 11:32:05.795180  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.795220  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (2.101524ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:05.795236  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:05.795328  110801 httplog.go:90] GET /healthz: (1.912481ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.797732  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.091674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.798064  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 11:32:05.799136  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (868.505µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.800926  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.426417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.801219  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 11:32:05.802181  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (782.312µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.803855  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.209856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.804179  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 11:32:05.805240  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (798.726µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.806867  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.252905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.807195  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 11:32:05.808274  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (830.017µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.810069  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.327196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.810337  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 11:32:05.813612  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.811025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.815346  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.393651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.815557  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 11:32:05.816892  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.04357ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.819162  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.931548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.819387  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 11:32:05.820648  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.006143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.822738  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.651075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.822951  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 11:32:05.824035  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (895.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.825952  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.528611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.826145  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 11:32:05.827273  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (891.174µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.828949  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.291744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.829166  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 11:32:05.830271  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (882.128µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.832365  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.59375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.832626  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 11:32:05.834243  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.274249ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.836113  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.486373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.836457  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 11:32:05.837702  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (970.654µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.839559  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.4537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.839865  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 11:32:05.840909  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (848.178µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.842739  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.401942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.843052  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 11:32:05.844098  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (873.471µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.846075  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.561672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.846371  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 11:32:05.847365  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (785.354µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.848971  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.849222  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 11:32:05.850235  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (712.17µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.852677  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.054355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.852909  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 11:32:05.855108  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.935669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.857065  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.583255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.857238  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 11:32:05.858497  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.065566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.860337  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.431353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.860638  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 11:32:05.861950  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.132632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.863855  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.519359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.864099  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 11:32:05.900302  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.900342  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:05.900393  110801 httplog.go:90] GET /healthz: (18.48842ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:05.900308  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.900460  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:05.900499  110801 httplog.go:90] GET /healthz: (6.661615ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:05.903657  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (21.968082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0814 11:32:05.906061  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.445224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:05.906329  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 11:32:05.922776  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.155371ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:05.943657  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.038707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:05.943900  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 11:32:05.964553  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.130674ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:05.983764  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.056254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:05.984263  110801 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 11:32:05.985288  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.985336  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:05.985372  110801 httplog.go:90] GET /healthz: (2.115307ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:05.994334  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:05.994359  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:05.994408  110801 httplog.go:90] GET /healthz: (1.078102ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.002753  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.209728ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.023600  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.023843  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 11:32:06.051855  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (6.142239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.064185  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.453272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.064548  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 11:32:06.082881  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.25581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.083701  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.083728  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.083757  110801 httplog.go:90] GET /healthz: (1.324391ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:06.094494  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.094555  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.094590  110801 httplog.go:90] GET /healthz: (978.483µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.104238  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.620306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.104687  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 11:32:06.123013  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.260793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.144058  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.348714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.144443  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 11:32:06.166110  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.156455ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.183642  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.183677  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.183703  110801 httplog.go:90] GET /healthz: (1.207319ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:06.183648  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.999756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.183944  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 11:32:06.194625  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.194658  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.194694  110801 httplog.go:90] GET /healthz: (869.478µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.202935  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.339563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.223706  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.076138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.224072  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 11:32:06.242949  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.318849ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.265252  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.960606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.265615  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 11:32:06.283099  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.283137  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.283172  110801 httplog.go:90] GET /healthz: (1.191727ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:06.283268  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.668528ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.294444  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.294489  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.294566  110801 httplog.go:90] GET /healthz: (1.14609ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.303280  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.768803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.303487  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 11:32:06.322932  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.241035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.343941  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.41391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.344149  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 11:32:06.362723  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.133243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.383553  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.383590  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.383627  110801 httplog.go:90] GET /healthz: (1.698046ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:06.383849  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.216156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.384039  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 11:32:06.394452  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.394482  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.394551  110801 httplog.go:90] GET /healthz: (1.169202ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.402854  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.322188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.423547  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.9054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.423837  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 11:32:06.442992  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.20723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.463810  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.144646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.464052  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 11:32:06.482819  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.209796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:06.483862  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.483895  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.483938  110801 httplog.go:90] GET /healthz: (1.771724ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:06.494217  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.494246  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.494483  110801 httplog.go:90] GET /healthz: (1.046641ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.503219  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.731256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.503403  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 11:32:06.522831  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.195865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.543897  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.221793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.544161  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 11:32:06.562851  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.156858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.582860  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.582891  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.582925  110801 httplog.go:90] GET /healthz: (973.443µs) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:06.583736  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.158765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.583984  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 11:32:06.595029  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.595057  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.595345  110801 httplog.go:90] GET /healthz: (2.042587ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.605103  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (3.517898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.624140  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.492846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.624425  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 11:32:06.642979  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.395382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.663899  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.304848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.664101  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 11:32:06.683173  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.683205  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.683267  110801 httplog.go:90] GET /healthz: (1.271812ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:06.684296  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.933898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.694334  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.694366  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.694403  110801 httplog.go:90] GET /healthz: (1.058974ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.707364  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.548391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.707641  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 11:32:06.723028  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.392881ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.744203  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.513697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.744427  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 11:32:06.770207  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.448667ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.782968  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.783009  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.783050  110801 httplog.go:90] GET /healthz: (1.075721ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:06.784620  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.919037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.784877  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 11:32:06.794543  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.794571  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.794604  110801 httplog.go:90] GET /healthz: (1.232741ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.802876  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.364524ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.823519  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.887276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.823970  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 11:32:06.842986  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.328563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.864161  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.543244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.865830  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 11:32:06.883240  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.647856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.883586  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.883612  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.883676  110801 httplog.go:90] GET /healthz: (1.730744ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:06.894510  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.894568  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.894615  110801 httplog.go:90] GET /healthz: (1.14513ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.903961  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.446344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.904345  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 11:32:06.923024  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.406241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.944236  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.618072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.944460  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 11:32:06.964017  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.390601ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.982901  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.982935  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.982979  110801 httplog.go:90] GET /healthz: (1.060418ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:06.984708  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.136274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:06.984903  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 11:32:06.994355  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:06.994562  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:06.994607  110801 httplog.go:90] GET /healthz: (1.266571ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.003762  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (2.102337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.023665  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.019486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.023945  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 11:32:07.042992  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.323299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.063914  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.003076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.064441  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 11:32:07.082879  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.082912  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.082948  110801 httplog.go:90] GET /healthz: (937.877µs) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:07.083005  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.423433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.094580  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.094635  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.094675  110801 httplog.go:90] GET /healthz: (1.323744ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.103952  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.44273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.104357  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 11:32:07.122931  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.404648ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.143750  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.151886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.143974  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 11:32:07.162837  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.245432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.182732  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.182764  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.182821  110801 httplog.go:90] GET /healthz: (894.823µs) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:07.184171  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.42752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.184408  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 11:32:07.194581  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.194612  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.194659  110801 httplog.go:90] GET /healthz: (1.310149ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.205180  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (3.520644ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.223718  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.105023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.223964  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 11:32:07.243072  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.526768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.264270  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.690922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.264481  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 11:32:07.283118  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.3336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.283247  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.283284  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.283418  110801 httplog.go:90] GET /healthz: (1.349852ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:07.294360  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.294396  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.294450  110801 httplog.go:90] GET /healthz: (1.108532ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.303849  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.304067  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 11:32:07.323143  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.490316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.343442  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.77782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.343733  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 11:32:07.362896  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.309693ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.383046  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.383080  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.383117  110801 httplog.go:90] GET /healthz: (1.122259ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:07.383646  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.954671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.385412  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 11:32:07.395060  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.395200  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.395384  110801 httplog.go:90] GET /healthz: (1.962419ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.403149  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.531154ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.424044  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.293374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.424323  110801 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
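
Every clusterrolebinding above follows the same reconcile pattern: a GET that returns 404, then a POST that returns 201 followed by a "created ..." line. The sketch below shows that ensure-exists loop in isolation; the in-memory store and the function names are stand-ins for this sketch, not the real storage_rbac bootstrap code or client-go calls.

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// store stands in for the API server's REST storage in this sketch.
type store struct{ objects map[string]struct{} }

// get corresponds to the 404 lines in the log when the object is missing.
func (s *store) get(name string) error {
	if _, ok := s.objects[name]; !ok {
		return errNotFound
	}
	return nil
}

// create corresponds to the 201 POST lines.
func (s *store) create(name string) { s.objects[name] = struct{}{} }

// ensureClusterRoleBinding creates the binding only when it does not exist yet.
func ensureClusterRoleBinding(s *store, name string) {
	if err := s.get(name); errors.Is(err, errNotFound) {
		s.create(name)
		fmt.Printf("created clusterrolebinding.rbac.authorization.k8s.io/%s\n", name)
	}
}

func main() {
	s := &store{objects: map[string]struct{}{}}
	for _, name := range []string{
		"system:controller:node-controller",
		"system:controller:pv-protection-controller",
	} {
		ensureClusterRoleBinding(s, name)
	}
}
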
I0814 11:32:07.443246  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.53244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.445285  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.512976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.464652  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.999166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.465011  110801 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 11:32:07.487773  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (3.745843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.487793  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.487828  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.487873  110801 httplog.go:90] GET /healthz: (3.818983ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:07.490461  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.14937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.494640  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.494664  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.494702  110801 httplog.go:90] GET /healthz: (1.424765ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.504833  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.249999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.505087  110801 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 11:32:07.524910  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.764816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.526596  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.227592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.544272  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.157238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.544774  110801 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 11:32:07.565332  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.63476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.570620  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.906981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.589372  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.589401  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.589432  110801 httplog.go:90] GET /healthz: (1.92459ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:07.590385  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.499078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.590628  110801 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 11:32:07.594285  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.594315  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.594350  110801 httplog.go:90] GET /healthz: (1.010708ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.603123  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.58303ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.604747  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.15081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.623511  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.914432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.624341  110801 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 11:32:07.642908  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.283766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.644416  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.121887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.663111  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.490304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.663349  110801 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 11:32:07.682738  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.206989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.682808  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.682830  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.682859  110801 httplog.go:90] GET /healthz: (841.916µs) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:07.684315  110801 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.053254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.694343  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.694377  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.694421  110801 httplog.go:90] GET /healthz: (1.074102ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.703485  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.956645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.703975  110801 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 11:32:07.723046  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.371051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.724815  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.270064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.744451  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.465889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.744744  110801 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 11:32:07.762950  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.273337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.764743  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.264667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.783109  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.783144  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.783178  110801 httplog.go:90] GET /healthz: (1.081623ms) 0 [Go-http-client/1.1 127.0.0.1:53504]
I0814 11:32:07.783974  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.79331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.784188  110801 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 11:32:07.794665  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.794700  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.794751  110801 httplog.go:90] GET /healthz: (1.232203ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.802994  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.399802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.805104  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.626535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.825359  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.50843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.825596  110801 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 11:32:07.843220  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.513186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.845112  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.332973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.863937  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.315492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:07.864214  110801 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 11:32:07.883830  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.883871  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.883965  110801 httplog.go:90] GET /healthz: (1.976329ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:07.884901  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.140075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.887076  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.423083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.894256  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.894291  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.894336  110801 httplog.go:90] GET /healthz: (977.526µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.903574  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.962111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.903862  110801 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 11:32:07.923094  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.486153ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.924980  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.356366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.944119  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.455115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.944412  110801 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 11:32:07.967374  110801 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.837373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.969648  110801 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.823905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.982925  110801 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:32:07.982952  110801 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:32:07.982999  110801 httplog.go:90] GET /healthz: (1.108376ms) 0 [Go-http-client/1.1 127.0.0.1:53482]
I0814 11:32:07.984191  110801 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.616832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:07.984407  110801 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 11:32:08.001112  110801 httplog.go:90] GET /healthz: (7.752731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.003093  110801 httplog.go:90] GET /api/v1/namespaces/default: (1.49497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.005373  110801 httplog.go:90] POST /api/v1/namespaces: (1.888161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.006906  110801 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.170551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.010876  110801 httplog.go:90] POST /api/v1/namespaces/default/services: (3.368434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.012607  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.126808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.015616  110801 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.47729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.083267  110801 httplog.go:90] GET /healthz: (1.126071ms) 200 [Go-http-client/1.1 127.0.0.1:53504]
W0814 11:32:08.083988  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084017  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084040  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084061  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084078  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084088  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084099  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084109  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084119  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084161  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:32:08.084171  110801 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 11:32:08.084193  110801 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 11:32:08.084203  110801 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
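
The two factory.go lines above show the scheduler being assembled from the DefaultProvider algorithm provider: a set of fit predicates (hard filters such as MatchInterPodAffinity and PodToleratesNodeTaints) plus priority functions (soft scores such as InterPodAffinityPriority). The following stripped-down, stdlib-only sketch illustrates that filter-then-score structure; the types and the example predicate/priority are hypothetical and are not the kube-scheduler implementation.

package main

import "fmt"

type node struct {
	name    string
	freeCPU int // millicores
	freeMem int // MiB
}

type pod struct {
	name string
	cpu  int
	mem  int
}

// A fit predicate answers "can this pod run on this node at all?".
type fitPredicate func(p pod, n node) bool

// A priority function ranks the nodes that passed every predicate.
type priorityFunc func(p pod, n node) int

// schedule filters nodes with every predicate, then picks the highest-scoring survivor.
func schedule(p pod, nodes []node, predicates []fitPredicate, priorities []priorityFunc) (string, bool) {
	best, bestScore, found := "", -1, false
	for _, n := range nodes {
		feasible := true
		for _, pred := range predicates {
			if !pred(p, n) {
				feasible = false
				break
			}
		}
		if !feasible {
			continue
		}
		score := 0
		for _, prio := range priorities {
			score += prio(p, n)
		}
		if score > bestScore {
			best, bestScore, found = n.name, score, true
		}
	}
	return best, found
}

func main() {
	predicates := []fitPredicate{func(p pod, n node) bool { return n.freeCPU >= p.cpu && n.freeMem >= p.mem }}
	priorities := []priorityFunc{func(p pod, n node) int { return n.freeCPU - p.cpu }} // prefer the least-loaded node
	nodes := []node{{"test-node-0", 500, 500}, {"test-node-1", 100, 500}}
	name, ok := schedule(pod{"waiting-pod", 200, 200}, nodes, predicates, priorities)
	fmt.Println(name, ok)
}
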
I0814 11:32:08.084715  110801 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.084742  110801 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.084819  110801 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.084836  110801 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.084874  110801 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.084893  110801 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085070  110801 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085095  110801 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085099  110801 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085112  110801 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085251  110801 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085329  110801 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085363  110801 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085376  110801 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085516  110801 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085554  110801 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085604  110801 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085617  110801 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085737  110801 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.085759  110801 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.086092  110801 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.086110  110801 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 11:32:08.087506  110801 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (479.16µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53828]
I0814 11:32:08.087558  110801 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (643.068µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:32:08.088039  110801 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (380.535µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53812]
I0814 11:32:08.088147  110801 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (489.989µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.088598  110801 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (468.846µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0814 11:32:08.089051  110801 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (366.897µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53816]
I0814 11:32:08.089611  110801 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (463.398µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53818]
I0814 11:32:08.090151  110801 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (425.081µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53824]
I0814 11:32:08.090387  110801 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=29447 labels= fields= timeout=7m37s
I0814 11:32:08.090655  110801 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (397.07µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:32:08.090973  110801 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (403.811µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0814 11:32:08.091111  110801 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=29445 labels= fields= timeout=8m45s
I0814 11:32:08.091317  110801 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=29447 labels= fields= timeout=5m54s
I0814 11:32:08.091638  110801 get.go:250] Starting watch for /api/v1/nodes, rv=29445 labels= fields= timeout=8m48s
I0814 11:32:08.091838  110801 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=29447 labels= fields= timeout=8m23s
I0814 11:32:08.091857  110801 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (392.435µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53812]
I0814 11:32:08.092053  110801 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=29447 labels= fields= timeout=5m38s
I0814 11:32:08.092274  110801 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=29445 labels= fields= timeout=6m45s
I0814 11:32:08.092287  110801 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=29446 labels= fields= timeout=5m32s
I0814 11:32:08.092297  110801 get.go:250] Starting watch for /api/v1/pods, rv=29446 labels= fields= timeout=9m29s
I0814 11:32:08.092595  110801 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=29447 labels= fields= timeout=9m2s
I0814 11:32:08.092789  110801 get.go:250] Starting watch for /api/v1/services, rv=29728 labels= fields= timeout=7m26s
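
The block above is the informer machinery starting up: each reflector issues a LIST with resourceVersion=0, then opens a WATCH from the resourceVersion the LIST returned (the rv=... values in the "Starting watch" lines). Here is a minimal sketch of that list-then-watch handoff against an in-memory event source; it is illustrative only and not client-go's Reflector.

package main

import "fmt"

type event struct {
	rv  int
	obj string
}

// source stands in for one API server resource in this sketch.
type source struct{ log []event }

// list returns the current snapshot plus the resourceVersion to watch from.
func (s *source) list() ([]string, int) {
	objs := make([]string, 0, len(s.log))
	for _, e := range s.log {
		objs = append(objs, e.obj)
	}
	return objs, len(s.log)
}

// watch replays every event that happened after resourceVersion rv.
func (s *source) watch(rv int) []event {
	return s.log[rv:]
}

func main() {
	src := &source{log: []event{{1, "pod-a"}, {2, "pod-b"}}}

	// Reflector step 1: LIST with resourceVersion=0 to fill the cache.
	snapshot, rv := src.list()
	fmt.Println("listed", snapshot, "rv", rv)

	// Something changes after the LIST...
	src.log = append(src.log, event{3, "pod-c"})

	// Reflector step 2: WATCH from the listed resourceVersion; only deltas arrive.
	for _, e := range src.watch(rv) {
		fmt.Println("watch event", e.rv, e.obj)
	}
}
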
I0814 11:32:08.184657  110801 shared_informer.go:211] caches populated
I0814 11:32:08.284875  110801 shared_informer.go:211] caches populated
I0814 11:32:08.385069  110801 shared_informer.go:211] caches populated
I0814 11:32:08.485213  110801 shared_informer.go:211] caches populated
I0814 11:32:08.585406  110801 shared_informer.go:211] caches populated
I0814 11:32:08.685599  110801 shared_informer.go:211] caches populated
I0814 11:32:08.785862  110801 shared_informer.go:211] caches populated
I0814 11:32:08.886084  110801 shared_informer.go:211] caches populated
I0814 11:32:08.986293  110801 shared_informer.go:211] caches populated
I0814 11:32:09.086487  110801 shared_informer.go:211] caches populated
I0814 11:32:09.089319  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:09.090235  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:09.090477  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:09.090754  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:09.091580  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:09.092028  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:09.092391  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:09.186649  110801 shared_informer.go:211] caches populated
I0814 11:32:09.286871  110801 shared_informer.go:211] caches populated
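
The "forcing resync" lines that recur roughly every second for the rest of the log come from the reflectors having been started with a 1s resync period in this test, so each one re-delivers its cached objects on a fixed timer even when nothing changed. A tiny ticker-based sketch of such a periodic resync loop follows; the function and parameter names are illustrative, and the real interval is set when the informer factory is constructed.

package main

import (
	"fmt"
	"time"
)

// resyncLoop re-delivers the cached objects to the handler on every tick,
// independent of whether anything changed, mirroring the "forcing resync" lines.
func resyncLoop(period time.Duration, cache []string, handle func(string), stop <-chan struct{}) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			for _, obj := range cache {
				handle(obj)
			}
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	go resyncLoop(time.Second, []string{"test-node-0"}, func(obj string) {
		fmt.Println("resync", obj)
	}, stop)
	time.Sleep(2500 * time.Millisecond)
	close(stop)
}
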
I0814 11:32:09.289910  110801 httplog.go:90] POST /api/v1/nodes: (2.553983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:09.290124  110801 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 11:32:09.292272  110801 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods: (1.861558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:09.292720  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/waiting-pod
I0814 11:32:09.292741  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/waiting-pod
I0814 11:32:09.292898  110801 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/waiting-pod", node "test-node-0"
I0814 11:32:09.292921  110801 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 11:32:09.292984  110801 framework.go:562] waiting for 30s for pod "waiting-pod" at permit
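
The framework.go:562 line is the Permit extension point at work: the test's permit plugin returns a "wait" status, so the framework parks waiting-pod for up to 30s until another plugin allows or rejects it (or the timeout fires). The channel-based sketch below shows that wait-with-timeout pattern in isolation; the types and names are hypothetical, and the real scheduling framework tracks a map of waiting pods rather than a single struct.

package main

import (
	"fmt"
	"time"
)

// waitingPod is one pod parked at the Permit extension point.
type waitingPod struct {
	decision chan bool // true = allow, false = reject
}

// waitAtPermit blocks until another plugin signals the pod or the timeout expires.
func waitAtPermit(wp *waitingPod, timeout time.Duration) error {
	select {
	case allowed := <-wp.decision:
		if !allowed {
			return fmt.Errorf("rejected at permit")
		}
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("timed out waiting at permit")
	}
}

func main() {
	wp := &waitingPod{decision: make(chan bool, 1)}

	// Another goroutine (standing in for the preempting pod's plugin) allows the pod.
	go func() {
		time.Sleep(100 * time.Millisecond)
		wp.decision <- true
	}()

	fmt.Println("permit result:", waitAtPermit(wp, 30*time.Second))
}
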
I0814 11:32:09.295777  110801 factory.go:615] Attempting to bind signalling-pod to test-node-1
I0814 11:32:09.295806  110801 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 11:32:09.296152  110801 scheduler.go:447] Failed to bind pod: permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod
E0814 11:32:09.296165  110801 scheduler.go:449] scheduler cache ForgetPod failed: pod 91180fb7-c4fb-412d-82fb-b5c58063cf22 wasn't assumed so cannot be forgotten
E0814 11:32:09.296177  110801 scheduler.go:605] error binding pod: Post http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod/binding: dial tcp 127.0.0.1:39269: connect: connection refused
E0814 11:32:09.296196  110801 factory.go:566] Error scheduling permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod: Post http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod/binding: dial tcp 127.0.0.1:39269: connect: connection refused; retrying
I0814 11:32:09.296218  110801 factory.go:624] Updating pod condition for permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 11:32:09.296449  110801 scheduler.go:280] Error updating the condition of the pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod: Put http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod/status: dial tcp 127.0.0.1:39269: connect: connection refused
E0814 11:32:09.296663  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
E0814 11:32:09.296938  110801 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39269/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/events: dial tcp 127.0.0.1:39269: connect: connection refused' (may retry after sleeping)
I0814 11:32:09.298299  110801 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/waiting-pod/binding: (2.182849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:09.298519  110801 scheduler.go:614] pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0814 11:32:09.300164  110801 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/events: (1.376857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
E0814 11:32:09.497224  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
E0814 11:32:09.897855  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
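
The repeated factory.go:599 errors above (and below) are a retry loop left over from an earlier permit-plugin test case: its apiserver on 127.0.0.1:39269 has already been shut down, so every re-fetch of signalling-pod fails with connection refused and the pod is simply requeued after a delay. The sketch below shows that kind of bounded retry with backoff in plain Go; it is illustrative only, and the scheduler's actual backoff lives in its scheduling queue rather than in a helper like this.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// getPodWithRetry re-fetches a pod URL, sleeping between attempts, and gives up
// after maxAttempts, mirroring the "connection refused; retrying..." lines.
func getPodWithRetry(url string, maxAttempts int, backoff time.Duration) error {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("attempt %d: %v; retrying...\n", attempt, err)
		time.Sleep(backoff)
		backoff *= 2 // back off a little more each time
	}
	return fmt.Errorf("giving up: %w", lastErr)
}

func main() {
	// 127.0.0.1:39269 is the already-stopped test apiserver from the log; any
	// unused local port reproduces the connection-refused behaviour.
	err := getPodWithRetry("http://127.0.0.1:39269/api/v1/namespaces/x/pods/signalling-pod", 3, 200*time.Millisecond)
	fmt.Println(err)
}
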
I0814 11:32:10.089498  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:10.090376  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:10.090581  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:10.090902  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:10.091709  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:10.092164  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:10.092523  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:32:10.698590  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
I0814 11:32:11.089718  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:11.090570  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:11.090722  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:11.091081  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:11.091926  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:11.092511  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:11.092662  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:12.089901  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:12.090798  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:12.090893  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:12.091968  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:12.092078  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:12.092629  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:12.092769  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:32:12.299182  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
I0814 11:32:13.090043  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:13.090946  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:13.091914  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:13.092109  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:13.092209  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:13.092880  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:13.092921  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:14.090235  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:14.091071  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:14.092022  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:14.092219  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:14.092338  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:14.093338  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:14.093371  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:15.090426  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:15.091225  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:15.092257  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:15.092338  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:15.092484  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:15.093559  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:15.093570  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:32:15.499781  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
I0814 11:32:16.090586  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:16.091413  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:16.092463  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:16.092586  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:16.092809  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:16.093714  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:16.095192  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:17.090779  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:17.091520  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:17.092615  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:17.092706  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:17.093194  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:17.093867  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:17.095365  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:18.003355  110801 httplog.go:90] GET /api/v1/namespaces/default: (1.464522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:18.004947  110801 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.268943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:18.006669  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.206925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:18.090952  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:18.091687  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:18.092753  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:18.092820  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:18.093316  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:18.093992  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:18.095498  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:19.091122  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:19.091771  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:19.093058  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:19.093084  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:19.093473  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:19.094155  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:19.095644  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:20.091225  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:20.091932  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:20.093204  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:20.093239  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:20.093611  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:20.094389  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:20.095805  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:32:20.806982  110801 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39269/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/events: dial tcp 127.0.0.1:39269: connect: connection refused' (may retry after sleeping)
I0814 11:32:21.091378  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:21.092336  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:21.093371  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:21.093451  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:21.093827  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:21.095581  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:21.095938  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:32:21.900408  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
I0814 11:32:22.091782  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:22.092652  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:22.093654  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:22.093722  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:22.093922  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:22.095706  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:22.096066  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:23.092230  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:23.092833  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:23.093757  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:23.093871  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:23.094080  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:23.095861  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:23.096205  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:24.092389  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:24.092997  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:24.093897  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:24.094086  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:24.094203  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:24.095996  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:24.096338  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:25.092507  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:25.093221  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:25.094082  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:25.094227  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:25.094354  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:25.096191  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:25.096589  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:26.092687  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:26.093376  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:26.094221  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:26.094353  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:26.095441  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:26.096356  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:26.096756  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:27.092827  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:27.093551  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:27.094388  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:27.094546  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:27.095608  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:27.096520  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:27.096880  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:28.003408  110801 httplog.go:90] GET /api/v1/namespaces/default: (1.432596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:28.005268  110801 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.379809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:28.006989  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.333565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:28.093032  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:28.094607  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:28.095127  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:28.095131  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:28.096139  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:28.096761  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:28.097013  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:29.093230  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:29.094742  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:29.095238  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:29.095722  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:29.096578  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:29.096915  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:29.097149  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:30.093380  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:30.094923  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:30.095396  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:30.095903  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:30.097238  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:30.097307  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:30.097326  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:32:30.905163  110801 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39269/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/events: dial tcp 127.0.0.1:39269: connect: connection refused' (may retry after sleeping)
I0814 11:32:31.093584  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:31.095081  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:31.095560  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:31.096050  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:31.097325  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:31.097447  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:31.097478  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:32.093764  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:32.095259  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:32.095740  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:32.096204  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:32.097495  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:32.097573  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:32.097589  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:33.093933  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:33.095516  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:33.095975  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:33.096342  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:33.097595  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:33.097767  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:33.097776  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:34.094114  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:34.095696  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:34.096177  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:34.096483  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:34.097755  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:34.097916  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:34.097927  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:32:34.701008  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
I0814 11:32:35.094292  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:35.095844  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:35.096329  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:35.096607  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:35.097853  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:35.098028  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:35.098053  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:36.094517  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:36.096037  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:36.096452  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:36.096751  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:36.097999  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:36.098175  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:36.098176  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:37.094721  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:37.096228  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:37.096605  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:37.096894  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:37.098142  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:37.098296  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:37.098305  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:38.004314  110801 httplog.go:90] GET /api/v1/namespaces/default: (2.170107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:38.007805  110801 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.067793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:38.009095  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (934.371µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:38.094940  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:38.096385  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:38.096763  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:38.097045  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:38.098287  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:38.098429  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:38.098435  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:39.095133  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:39.096552  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:39.096940  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:39.097157  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:39.098452  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:39.098565  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:39.098586  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:39.295205  110801 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods: (2.180041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:39.295582  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:39.295678  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:39.295852  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:39.295944  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:39.297449  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.20826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0814 11:32:39.297982  110801 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod/status: (1.779003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:39.298657  110801 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/events: (1.428169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.299704  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.147233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0814 11:32:39.299999  110801 generic_scheduler.go:1191] Node test-node-0 is a potential node for preemption.
I0814 11:32:39.302254  110801 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod/status: (1.825665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.305492  110801 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/waiting-pod: (2.786461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.308402  110801 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/events: (1.446921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.398249  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.302719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.497907  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.825777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.597760  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.893546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.698171  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.144116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.797792  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.84472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.897638  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.717795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:39.998060  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.974945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.095363  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:40.096739  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:40.097073  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:40.097161  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:40.097175  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:40.097285  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:40.097313  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:40.097448  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:40.098581  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.606122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.098675  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:40.099023  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:40.099044  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:40.100221  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.213194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0814 11:32:40.100221  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.369021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.100881  110801 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/events: (2.801159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58342]
I0814 11:32:40.197594  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.681253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.297562  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.610131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.397648  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.710291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.497687  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.783377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.597954  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.037267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.697643  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.66564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.797857  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.86898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.897582  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.670679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:40.997850  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.82802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:41.091074  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:41.091106  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:41.091406  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:41.091498  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:41.093854  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.955036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:41.095252  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.249532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58344]
I0814 11:32:41.095513  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:41.095974  110801 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/events/preemptor-pod.15bac6c2b92cbebd: (3.152255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.096874  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.196651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58344]
I0814 11:32:41.096889  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:41.097226  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:41.097339  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:41.097359  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:41.097410  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:41.097468  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:41.097521  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:41.098800  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:41.099145  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:41.099173  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:41.099376  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.393499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:41.099970  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.964288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.197742  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.769192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.297597  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.661115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.397942  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.989569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.497727  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.777416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.598015  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.975712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.697907  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.950367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.797585  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.635585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.897425  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.5254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:41.997490  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.623499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:42.095873  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:42.097094  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:42.097361  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:42.097395  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.529654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:42.097499  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:42.097510  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:42.097550  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:42.097909  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:42.097958  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:42.098942  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:42.099265  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:42.099275  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:42.100022  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.418084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:42.100352  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.651417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
E0814 11:32:42.172274  110801 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39269/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/events: dial tcp 127.0.0.1:39269: connect: connection refused' (may retry after sleeping)
I0814 11:32:42.198131  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.18495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:42.301714  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (5.668214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:42.397841  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.854143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:42.497735  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.800111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:42.597941  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.915084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:42.698065  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.052795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:42.797940  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.952125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:42.897825  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.893056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:42.997767  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.798561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.096060  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:43.097272  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:43.097593  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:43.097687  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:43.097727  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:43.097743  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:43.097952  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:43.098000  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:43.098361  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.409417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.099093  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:43.099402  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:43.099415  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:43.100234  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.361528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.100333  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.1494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58348]
I0814 11:32:43.197740  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.82418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.297748  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.868414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.398839  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.867682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.497732  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.699763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.601411  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.312492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.700108  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (4.158783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.797637  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.713898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.897712  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.794586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:43.997667  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.685611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.096497  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:44.097450  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:44.097741  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:44.097820  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.847772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.097821  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:44.097875  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:44.097893  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:44.098117  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:44.098199  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:44.099553  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:44.099903  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.330928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:44.100036  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.619335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.100114  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:44.100138  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:44.197881  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.933223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.298136  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.020772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.398046  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.024369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.497679  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.689837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.597980  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.018537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.697800  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.794488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.798012  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.022989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.898101  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.998375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:44.997987  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.91242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:45.096803  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:45.097606  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:45.097887  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.896031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:45.097977  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:45.098023  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:45.098115  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:45.098129  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:45.098294  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:45.098353  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:45.099718  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:45.100297  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:45.100303  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:45.100503  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.912451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:45.101163  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.346759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.198084  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.031005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.297505  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.605182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.397703  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.81441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.497994  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.892901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.597399  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.521239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.697244  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.392697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.797503  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.580557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.897678  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.775062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:45.997499  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.629135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:46.096967  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:46.097771  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:46.097822  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.87042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:46.098124  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:46.098206  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:46.098316  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:46.098330  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:46.098428  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:46.098469  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:46.100046  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.210617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:46.100107  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.385723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.100148  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:46.100402  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:46.100473  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:46.197636  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.758883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.297503  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.612844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.397677  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.776884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.497509  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.556163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.597645  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.738206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.697485  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.609919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.797242  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.339028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.897218  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.329673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:46.997371  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.464053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:47.097101  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:47.097585  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.705009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:47.097930  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:47.098255  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:47.098357  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:47.098492  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:47.098511  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:47.098671  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:47.098724  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:47.100521  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.610493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:47.100556  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.476695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.100598  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:47.100616  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:47.100614  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:47.197616  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.702998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.297807  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.916304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.397827  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.817161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.497611  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.719294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.597777  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.845636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.697812  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.838646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.800638  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.857857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.898482  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.594666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:47.999008  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.986631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.005224  110801 httplog.go:90] GET /api/v1/namespaces/default: (2.050735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.006732  110801 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.194714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.009353  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.291568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.097312  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.427944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.097412  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:48.098056  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:48.098402  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:48.098472  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:48.098607  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:48.098620  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:48.098735  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:48.098794  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:48.100516  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (870.957µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.100516  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.328275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:48.101057  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:48.101081  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:48.101152  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:48.197512  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.6542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.297741  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.843773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.397698  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.736441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.497602  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.7189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.597832  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.909414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.697793  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.858268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.797806  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.849976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.897985  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.069561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:48.997932  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.87876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:49.097469  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.462509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:49.097813  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:49.098283  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:49.098570  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:49.098650  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:49.098704  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:49.098718  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:49.098824  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:49.098860  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:49.100098  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.083197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:49.100369  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.251481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.101206  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:49.101210  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:49.101345  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:49.197520  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.506581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.297600  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.692723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.397885  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.917431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.497642  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.707721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.597572  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.576248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.697682  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.741738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.797677  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.649973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.897572  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.619726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:49.997390  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.516802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:50.097495  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.601139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:50.097985  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:50.098451  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:50.098707  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:50.098843  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:50.098856  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:50.098974  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:50.099023  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:50.099234  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:50.101456  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:50.101485  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:50.101500  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:50.102580  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.878693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58656]
I0814 11:32:50.102929  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.928409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.197507  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.575774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.302454  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.711122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.397415  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.510345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.497499  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.641177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.597372  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.466186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.697660  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.71593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.797805  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.869119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.897486  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.605429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:50.997398  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.530374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:51.097941  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.96219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:51.098183  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:51.098673  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:51.098914  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:51.099093  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:51.099185  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:51.099371  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:51.099448  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:51.099511  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:51.101276  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.439246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:51.101399  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.410054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.101566  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:51.101583  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:51.101588  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:51.197439  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.56683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.297234  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.380235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.398022  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.179644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.497365  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.452817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.597414  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.495982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.697667  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.676563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.797793  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.827652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.898032  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.076761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:51.997502  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.625636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.097724  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.780745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.098288  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:52.098933  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:52.099078  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:52.099176  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:52.099202  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:52.099341  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:52.099397  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:52.099688  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:52.101415  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.653419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:52.101602  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.792682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.101882  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:52.102030  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:52.102052  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:52.197549  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.559646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.297906  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.961451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.397856  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.921861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.498127  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.611035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.597808  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.884479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.697649  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.725017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.797419  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.447744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.898779  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.730056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:52.997401  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.514691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.097882  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.73415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.098401  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:53.099083  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:53.099213  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:53.099339  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:53.099356  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:53.099483  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:53.099543  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:53.099849  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:53.100971  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.034384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:53.100975  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.213523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.102120  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:53.102164  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:53.102187  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:53.197339  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.485523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.298592  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.681924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.397694  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.726036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.497453  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.53244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.597322  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.365828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.697849  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.959759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.798254  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.272428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.897669  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.708203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:53.997358  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.406153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.097341  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.510498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.098575  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:54.099234  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:54.099341  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:54.099519  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:54.099557  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:54.099670  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:54.099723  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:54.099988  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:54.101872  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.701097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:54.102570  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.903371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.102637  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:54.102638  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:54.102680  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:54.197360  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.380742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.297443  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.605927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.397393  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.462052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
E0814 11:32:54.454148  110801 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39269/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/events: dial tcp 127.0.0.1:39269: connect: connection refused' (may retry after sleeping)
I0814 11:32:54.497703  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.764083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.597289  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.531711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.697667  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.790973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.797298  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.373823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.897483  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.604156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:54.997668  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.787321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:55.097595  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.697561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:55.098716  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:55.099374  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:55.099495  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:55.099626  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:55.099638  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:55.099734  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:55.099769  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:55.100616  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:55.101490  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.492656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:55.101505  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.378029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.102749  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:55.102789  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:55.102817  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:55.197567  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.642849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.297590  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.603708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.397376  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.514157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.497566  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.678963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.597497  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.554894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.697440  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.512295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.797474  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.554107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.897403  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.57379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:55.997120  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.279411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:56.097209  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.406376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:56.098837  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:56.099544  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:56.099629  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:56.099727  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:56.099749  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:56.099912  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:56.099946  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:56.100744  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:56.101418  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.289161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:56.101698  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.411746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.102890  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:56.102892  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:56.102942  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:56.197509  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.516042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.298476  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.600455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.397367  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.532268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.497350  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.468848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.597650  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.730014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.697489  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.574264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.797697  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.775959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.897312  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.447047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:56.997457  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.536535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.097982  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.648252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.099250  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:57.099705  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:57.099868  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:57.099959  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:57.099977  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:57.100114  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:57.100158  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:57.100885  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:57.102059  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.653013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.102372  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.803109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:57.103050  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:57.103052  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:57.103131  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:57.197417  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.56764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.297280  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.376353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.397465  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.572109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.497275  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.370353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.597474  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.558373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.697618  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.648305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.797390  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.50009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.897462  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.498717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:57.997794  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.916986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.004992  110801 httplog.go:90] GET /api/v1/namespaces/default: (1.711359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.006860  110801 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.371026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.008449  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.172188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.097312  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.383545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.099381  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:58.099888  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:58.100006  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:58.100034  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:58.100047  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:58.100191  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:58.100230  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:58.101114  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:58.102363  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.762068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:58.102673  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.235236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.103403  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:58.103423  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:58.103513  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:58.197474  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.566365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.297200  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.226666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.397643  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.766463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.498045  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.078844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.597388  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.521752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.697817  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.750983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.797634  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.692712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.897580  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.701626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:58.997637  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.72209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:59.097731  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.776896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:59.099463  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:59.100043  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:59.100210  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:59.100233  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:32:59.100383  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:32:59.100431  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:32:59.101172  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:59.101190  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:59.102118  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.440709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:32:59.102893  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.121546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.103589  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:59.103598  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:59.103649  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:32:59.198117  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.235755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.297438  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.524357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.397232  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.380904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.497390  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.552198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.597408  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.541988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.697415  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.498633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.797364  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.400185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.897310  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.42144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:32:59.997660  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.757366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.097454  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.546093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.099636  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:00.100223  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:00.100353  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:00.100372  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:00.100512  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:00.100711  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:00.101324  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:00.101474  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:00.102631  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.578562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:00.102651  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.622878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.103733  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:00.103770  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:00.103771  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:00.197924  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.731602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.298251  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.355704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
E0814 11:33:00.305307  110801 factory.go:599] Error getting pod permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/signalling-pod for retry: Get http://127.0.0.1:39269/api/v1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/pods/signalling-pod: dial tcp 127.0.0.1:39269: connect: connection refused; retrying...
I0814 11:33:00.397472  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.574647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.497051  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.231525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.597371  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.539257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.697189  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.366942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.797461  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.56583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.897244  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.419977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:00.997622  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.65303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:01.097517  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.63523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:01.099770  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:01.100384  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:01.100607  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:01.100623  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:01.100754  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:01.100794  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:01.101486  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:01.101637  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:01.102843  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.805473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:01.103140  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.915109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.103851  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:01.103873  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:01.103906  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:01.197917  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.003212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.297947  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.013717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.397739  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.663303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.497356  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.462309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.597583  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.59057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.697483  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.584308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.797575  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.519874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.897711  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.778255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:01.997885  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.905172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.097836  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.91428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.099912  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:02.100603  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:02.100842  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:02.100877  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:02.101077  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:02.101156  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:02.101614  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:02.101851  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:02.103091  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.693983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.103123  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.706056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:02.103990  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:02.104004  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:02.104024  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:02.198337  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.397655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.298903  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.458839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.398211  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.898965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.498402  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.151023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.598723  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.573539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.698304  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.145093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.798261  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.189209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.898387  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.283163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:02.998399  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.555981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:03.097890  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.839957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:03.100143  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:03.100854  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:03.101004  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:03.101016  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:03.101161  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:03.101235  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:03.102145  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:03.102268  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:03.104168  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.544759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.104378  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.76791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:03.104639  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:03.104672  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:03.104716  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:03.198218  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.060622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.297859  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.851425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.398219  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.268888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.498289  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.200121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.597625  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.604181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.697641  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.700261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.798073  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.855131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.898151  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.087345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:03.998282  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.27982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:04.097493  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.543755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:04.100283  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:04.101163  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:04.101366  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:04.101390  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:04.101545  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:04.101588  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:04.102333  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:04.102622  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:04.103958  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.96945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.104246  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.389169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:04.104714  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:04.104832  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:04.104848  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:04.197893  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.989451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.297464  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.370136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.398204  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.257119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.498062  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.087614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
E0814 11:33:04.580127  110801 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39269/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2cfa1d76-6793-4be9-ae65-2b032f32d96a/events: dial tcp 127.0.0.1:39269: connect: connection refused' (may retry after sleeping)
I0814 11:33:04.598145  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.051267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.697895  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.893779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.704044  110801 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.434751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.705986  110801 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.30212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.707444  110801 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.076639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.797906  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.908632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.897393  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.524254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:04.997686  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.747373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:05.097698  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.721847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:05.100442  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:05.101342  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:05.101479  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:05.101490  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:05.101657  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:05.101697  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:05.102490  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:05.102761  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:05.103467  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.534663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:05.105199  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (3.180343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.105681  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:05.105708  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:05.105720  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:05.197727  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.651486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.297520  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.597683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.398506  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.589187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.497485  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.459192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.597438  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.584632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.697408  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.464733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.797679  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.712672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.897952  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.07983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:05.997754  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.810087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.097373  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.508612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.100569  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:06.101509  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:06.101720  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:06.101748  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:06.101885  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:06.101921  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:06.102810  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:06.102907  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:06.104042  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.704119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:06.104171  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.786094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.105822  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:06.105848  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:06.105854  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:06.198059  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.044738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.301250  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.990748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.397394  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.529703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.497550  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.571646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.598056  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.078853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.697677  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.623345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.797428  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.603431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.897995  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.017152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:06.998109  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.062283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.094660  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:07.094696  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:07.094900  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:07.094945  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:07.097989  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.871197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:07.098273  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (3.069326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58328]
I0814 11:33:07.098343  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (3.128816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.100780  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:07.101717  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:07.102935  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:07.103066  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:07.106027  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:07.106059  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:07.106068  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:07.197817  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.926628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.297447  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.567353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.397391  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.526814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.497604  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.672645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.597371  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.462385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.697547  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.567322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.797481  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.432366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.897383  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.464047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:07.997320  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.413984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.004364  110801 httplog.go:90] GET /api/v1/namespaces/default: (1.02129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.005718  110801 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.001485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.007106  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (983.223µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.097398  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.521859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.100934  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:08.101892  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:08.101996  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:08.102005  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:08.102116  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:08.102152  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:08.103229  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:08.103279  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:08.104257  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.740365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:08.104257  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.250536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.106190  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:08.106223  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:08.106251  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:08.197655  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.755864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.298203  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.268387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.397844  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.036772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.497868  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.907681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.597790  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.94492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.697907  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.933968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.797788  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.894243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.897814  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.768653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:08.997239  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.412865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:09.097817  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.894994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:09.101097  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:09.102062  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:09.102191  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:09.102204  110801 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:09.102330  110801 factory.go:550] Unable to schedule preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:33:09.102376  110801 factory.go:624] Updating pod condition for preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:33:09.103418  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:09.103686  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:09.105334  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.962386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
I0814 11:33:09.105624  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.77037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.106437  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:09.106453  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:09.106465  110801 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:33:09.200472  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (4.612182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.299721  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (2.75638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.301648  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.454867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.303308  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/waiting-pod: (1.117215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.310994  110801 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/waiting-pod: (7.178514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.320432  110801 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (8.175353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.323323  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/waiting-pod: (1.336822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.324646  110801 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:09.324683  110801 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/preemptor-pod
I0814 11:33:09.326810  110801 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/pods/preemptor-pod: (1.857579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.327056  110801 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/events: (2.078374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59174]
E0814 11:33:09.327175  110801 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 11:33:09.327594  110801 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=29445&timeout=6m45s&timeoutSeconds=405&watch=true: (1m1.235733841s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53838]
I0814 11:33:09.327648  110801 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=29447&timeout=9m2s&timeoutSeconds=542&watch=true: (1m1.235346713s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53820]
I0814 11:33:09.327668  110801 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=29447&timeout=5m38s&timeoutSeconds=338&watch=true: (1m1.235860266s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53836]
I0814 11:33:09.327755  110801 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=29728&timeout=7m26s&timeoutSeconds=446&watch=true: (1m1.235331448s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53824]
I0814 11:33:09.327791  110801 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=29447&timeout=7m37s&timeoutSeconds=457&watch=true: (1m1.237746746s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53482]
I0814 11:33:09.327810  110801 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29445&timeout=8m48s&timeoutSeconds=528&watch=true: (1m1.236516876s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53832]
I0814 11:33:09.327902  110801 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=29447&timeout=5m54s&timeoutSeconds=354&watch=true: (1m1.236829284s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53828]
I0814 11:33:09.327927  110801 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=29445&timeout=8m45s&timeoutSeconds=525&watch=true: (1m1.237152s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0814 11:33:09.328027  110801 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=29446&timeout=5m32s&timeoutSeconds=332&watch=true: (1m1.235998393s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53818]
I0814 11:33:09.328069  110801 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=29447&timeout=8m23s&timeoutSeconds=503&watch=true: (1m1.236478881s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53504]
I0814 11:33:09.328091  110801 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=29446&timeout=9m29s&timeoutSeconds=569&watch=true: (1m1.236072773s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53816]
I0814 11:33:09.332568  110801 httplog.go:90] DELETE /api/v1/nodes: (4.470192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.332825  110801 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 11:33:09.334717  110801 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.552069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
I0814 11:33:09.338444  110801 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (3.376769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60996]
--- FAIL: TestPreemptWithPermitPlugin (64.78s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted
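
    The two failed assertions above poll the API server until the preemptor pod is bound to a node and the waiting pod has been deleted; the repeated GET /pods/preemptor-pod requests in the log are that polling loop running until it times out. Below is a minimal sketch of such a wait condition, assuming a hypothetical getPod helper rather than the actual helpers in test/integration/scheduler/framework_test.go:

    // Sketch only, not the real test code: getPod is a hypothetical stand-in
    // for fetching the pod from the API server.
    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // podScheduled reports true once the pod has been bound to a node
    // (Spec.NodeName is set), i.e. "the preemptor pod to be scheduled".
    func podScheduled(getPod func() (*v1.Pod, error)) wait.ConditionFunc {
        return func() (bool, error) {
            pod, err := getPod()
            if err != nil {
                return false, err
            }
            return pod.Spec.NodeName != "", nil
        }
    }

    func main() {
        // Placeholder pod that never gets scheduled, so the poll times out;
        // interval/timeout are shortened here relative to the real test.
        getPod := func() (*v1.Pod, error) { return &v1.Pod{}, nil }
        err := wait.Poll(100*time.Millisecond, 2*time.Second, podScheduled(getPod))
        fmt.Println(err) // prints "timed out waiting for the condition", as in the failure message
    }

    In this run the condition never became true: the scheduler kept reporting "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory." for preemptor-pod, so waiting-pod was never preempted, and both pods were only removed during test cleanup at 11:33:09.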

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-112505.xml

Find preempt-with-permit-plugin288d9cff-6f22-4da2-a88c-d61f5ed80a1f/waiting-pod mentions in log files | View test history on testgrid


Error lines from build-log.txt

... skipping 695 lines ...
W0814 11:20:00.512] W0814 11:20:00.454961   53204 controllermanager.go:555] "serviceaccount-token" is disabled because there is no private key
W0814 11:20:00.513] I0814 11:20:00.455379   53204 controllermanager.go:535] Started "serviceaccount"
W0814 11:20:00.513] I0814 11:20:00.455703   53204 controllermanager.go:535] Started "deployment"
W0814 11:20:00.513] I0814 11:20:00.456047   53204 controllermanager.go:535] Started "replicaset"
W0814 11:20:00.513] I0814 11:20:00.456488   53204 controllermanager.go:535] Started "horizontalpodautoscaling"
W0814 11:20:00.513] I0814 11:20:00.456808   53204 controllermanager.go:535] Started "cronjob"
W0814 11:20:00.514] E0814 11:20:00.457198   53204 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0814 11:20:00.514] W0814 11:20:00.457225   53204 controllermanager.go:527] Skipping "service"
W0814 11:20:00.514] I0814 11:20:00.457670   53204 serviceaccounts_controller.go:117] Starting service account controller
W0814 11:20:00.514] I0814 11:20:00.457701   53204 controller_utils.go:1029] Waiting for caches to sync for service account controller
W0814 11:20:00.514] I0814 11:20:00.457723   53204 deployment_controller.go:152] Starting deployment controller
W0814 11:20:00.514] I0814 11:20:00.457736   53204 controller_utils.go:1029] Waiting for caches to sync for deployment controller
W0814 11:20:00.515] I0814 11:20:00.457789   53204 replica_set.go:182] Starting replicaset controller
... skipping 65 lines ...
W0814 11:20:00.931] I0814 11:20:00.930743   53204 controller_utils.go:1029] Waiting for caches to sync for disruption controller
W0814 11:20:00.931] W0814 11:20:00.930770   53204 controllermanager.go:527] Skipping "csrsigning"
W0814 11:20:00.932] I0814 11:20:00.931764   53204 controllermanager.go:535] Started "ttl"
W0814 11:20:00.932] I0814 11:20:00.931888   53204 ttl_controller.go:116] Starting TTL controller
W0814 11:20:00.932] I0814 11:20:00.931916   53204 controller_utils.go:1029] Waiting for caches to sync for TTL controller
W0814 11:20:00.933] I0814 11:20:00.932057   53204 node_lifecycle_controller.go:77] Sending events to api server
W0814 11:20:00.933] E0814 11:20:00.932095   53204 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 11:20:00.933] W0814 11:20:00.932107   53204 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0814 11:20:00.933] W0814 11:20:00.932416   53204 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0814 11:20:00.933] I0814 11:20:00.933097   53204 controllermanager.go:535] Started "attachdetach"
W0814 11:20:00.934] I0814 11:20:00.933570   53204 controllermanager.go:535] Started "persistentvolume-expander"
W0814 11:20:00.934] I0814 11:20:00.933916   53204 controllermanager.go:535] Started "replicationcontroller"
W0814 11:20:00.934] I0814 11:20:00.934276   53204 controllermanager.go:535] Started "daemonset"
... skipping 22 lines ...
W0814 11:20:00.941] I0814 11:20:00.940844   53204 controllermanager.go:535] Started "csrcleaner"
W0814 11:20:00.941] W0814 11:20:00.940907   53204 controllermanager.go:514] "bootstrapsigner" is disabled
W0814 11:20:00.941] I0814 11:20:00.940882   53204 cleaner.go:81] Starting CSR cleaner controller
W0814 11:20:00.942] I0814 11:20:00.941703   53204 controllermanager.go:535] Started "podgc"
W0814 11:20:00.942] I0814 11:20:00.941924   53204 gc_controller.go:76] Starting GC controller
W0814 11:20:00.942] I0814 11:20:00.942093   53204 controller_utils.go:1029] Waiting for caches to sync for GC controller
W0814 11:20:00.984] W0814 11:20:00.984055   53204 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0814 11:20:01.013] I0814 11:20:01.012101   53204 controller_utils.go:1036] Caches are synced for taint controller
W0814 11:20:01.013] I0814 11:20:01.012240   53204 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
W0814 11:20:01.014] I0814 11:20:01.012315   53204 taint_manager.go:186] Starting NoExecuteTaintManager
W0814 11:20:01.014] I0814 11:20:01.012323   53204 node_lifecycle_controller.go:1039] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0814 11:20:01.014] I0814 11:20:01.012467   53204 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"3a963a84-2c1a-4dd8-addd-fed8cca4f113", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0814 11:20:01.029] I0814 11:20:01.029008   53204 controller_utils.go:1036] Caches are synced for job controller
... skipping 30 lines ...
I0814 11:20:01.257]   "buildDate": "2019-08-14T11:18:20Z",
I0814 11:20:01.257]   "goVersion": "go1.12.1",
I0814 11:20:01.257]   "compiler": "gc",
I0814 11:20:01.257]   "platform": "linux/amd64"
I0814 11:20:01.415] }+++ [0814 11:20:01] Testing kubectl version: check client only output matches expected output
W0814 11:20:01.516] I0814 11:20:01.312097   53204 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0814 11:20:01.517] E0814 11:20:01.331337   53204 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 11:20:01.517] E0814 11:20:01.342701   53204 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 11:20:01.517] I0814 11:20:01.358138   53204 controller_utils.go:1036] Caches are synced for HPA controller
W0814 11:20:01.517] I0814 11:20:01.512043   53204 controller_utils.go:1036] Caches are synced for certificate controller
W0814 11:20:01.518] I0814 11:20:01.512093   53204 controller_utils.go:1036] Caches are synced for resource quota controller
W0814 11:20:01.541] I0814 11:20:01.540520   53204 controller_utils.go:1036] Caches are synced for endpoint controller
W0814 11:20:01.621] I0814 11:20:01.621098   53204 controller_utils.go:1036] Caches are synced for namespace controller
W0814 11:20:01.628] I0814 11:20:01.627565   53204 controller_utils.go:1036] Caches are synced for garbage collector controller
... skipping 64 lines ...
I0814 11:20:04.798] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:20:04.800] +++ command: run_RESTMapper_evaluation_tests
I0814 11:20:04.811] +++ [0814 11:20:04] Creating namespace namespace-1565781604-6479
I0814 11:20:04.887] namespace/namespace-1565781604-6479 created
I0814 11:20:04.959] Context "test" modified.
I0814 11:20:04.965] +++ [0814 11:20:04] Testing RESTMapper
I0814 11:20:05.072] +++ [0814 11:20:05] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0814 11:20:05.085] +++ exit code: 0
I0814 11:20:05.198] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0814 11:20:05.198] bindings                                                                      true         Binding
I0814 11:20:05.198] componentstatuses                 cs                                          false        ComponentStatus
I0814 11:20:05.198] configmaps                        cm                                          true         ConfigMap
I0814 11:20:05.199] endpoints                         ep                                          true         Endpoints
... skipping 664 lines ...
I0814 11:20:24.618] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0814 11:20:24.709] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0814 11:20:24.782] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0814 11:20:24.877] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0814 11:20:25.028] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:20:25.210] (Bpod/env-test-pod created
W0814 11:20:25.311] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0814 11:20:25.311] error: setting 'all' parameter but found a non empty selector. 
W0814 11:20:25.312] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 11:20:25.312] I0814 11:20:24.292160   49763 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0814 11:20:25.312] error: min-available and max-unavailable cannot be both specified
I0814 11:20:25.413] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0814 11:20:25.413] Name:         env-test-pod
I0814 11:20:25.414] Namespace:    test-kubectl-describe-pod
I0814 11:20:25.414] Priority:     0
I0814 11:20:25.414] Node:         <none>
I0814 11:20:25.414] Labels:       <none>
... skipping 173 lines ...
I0814 11:20:38.945] (Bpod/valid-pod patched
I0814 11:20:39.040] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0814 11:20:39.125] (Bpod/valid-pod patched
I0814 11:20:39.222] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0814 11:20:39.397] (Bpod/valid-pod patched
I0814 11:20:39.502] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 11:20:39.686] (B+++ [0814 11:20:39] "kubectl patch with resourceVersion 496" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0814 11:20:39.924] pod "valid-pod" deleted
I0814 11:20:39.937] pod/valid-pod replaced
I0814 11:20:40.047] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0814 11:20:40.215] (BSuccessful
I0814 11:20:40.216] message:error: --grace-period must have --force specified
I0814 11:20:40.216] has:\-\-grace-period must have \-\-force specified
I0814 11:20:40.390] Successful
I0814 11:20:40.390] message:error: --timeout must have --force specified
I0814 11:20:40.391] has:\-\-timeout must have \-\-force specified
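The two failures above exercise kubectl's flag validation for deletions: per the asserted messages, --grace-period and --timeout are rejected unless --force is also given. A minimal sketch of accepted forms, assuming a pod named valid-pod in the current namespace:

  # forced, immediate deletion
  kubectl delete pod valid-pod --force --grace-period=0
  # forced deletion, giving up after one minute
  kubectl delete pod valid-pod --force --timeout=1m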
I0814 11:20:40.552] node/node-v1-test created
W0814 11:20:40.653] W0814 11:20:40.552933   53204 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0814 11:20:40.754] node/node-v1-test replaced
I0814 11:20:40.839] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0814 11:20:40.925] (Bnode "node-v1-test" deleted
I0814 11:20:41.028] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 11:20:41.309] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0814 11:20:42.312] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 66 lines ...
I0814 11:20:46.362] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:20:46.518] (Bpod/test-pod created
W0814 11:20:46.619] Edit cancelled, no changes made.
W0814 11:20:46.620] Edit cancelled, no changes made.
W0814 11:20:46.620] Edit cancelled, no changes made.
W0814 11:20:46.620] Edit cancelled, no changes made.
W0814 11:20:46.620] error: 'name' already has a value (valid-pod), and --overwrite is false
W0814 11:20:46.620] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 11:20:46.621] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 11:20:46.721] pod "test-pod" deleted
I0814 11:20:46.721] +++ [0814 11:20:46] Creating namespace namespace-1565781646-14318
I0814 11:20:46.772] namespace/namespace-1565781646-14318 created
I0814 11:20:46.843] Context "test" modified.
... skipping 41 lines ...
I0814 11:20:50.012] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0814 11:20:50.015] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:20:50.017] +++ command: run_kubectl_create_error_tests
I0814 11:20:50.028] +++ [0814 11:20:50] Creating namespace namespace-1565781650-9884
I0814 11:20:50.104] namespace/namespace-1565781650-9884 created
I0814 11:20:50.178] Context "test" modified.
I0814 11:20:50.184] +++ [0814 11:20:50] Testing kubectl create with error
W0814 11:20:50.284] Error: must specify one of -f and -k
W0814 11:20:50.285] 
W0814 11:20:50.285] Create a resource from a file or from stdin.
W0814 11:20:50.285] 
W0814 11:20:50.285]  JSON and YAML formats are accepted.
W0814 11:20:50.285] 
W0814 11:20:50.285] Examples:
... skipping 41 lines ...
W0814 11:20:50.290] 
W0814 11:20:50.290] Usage:
W0814 11:20:50.290]   kubectl create -f FILENAME [options]
W0814 11:20:50.291] 
W0814 11:20:50.291] Use "kubectl <command> --help" for more information about a given command.
W0814 11:20:50.291] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0814 11:20:50.424] +++ [0814 11:20:50] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0814 11:20:50.524] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 11:20:50.525] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 11:20:50.628] +++ exit code: 0
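The validation failure above is intentional: the manifest's container args contain a nil entry, so client-side schema validation rejects it. As the error text itself points out, --validate=false skips that client-side check (the server may still refuse the object); a sketch reusing the path from the log:

  # fails client-side validation because args[0] is nil
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml
  # retry with client-side validation disabled, as the error message suggests
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false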
I0814 11:20:50.629] Recording: run_kubectl_apply_tests
I0814 11:20:50.629] Running command: run_kubectl_apply_tests
I0814 11:20:50.647] 
... skipping 19 lines ...
W0814 11:20:52.702] I0814 11:20:52.702058   49763 client.go:354] parsed scheme: ""
W0814 11:20:52.703] I0814 11:20:52.702098   49763 client.go:354] scheme "" not registered, fallback to default scheme
W0814 11:20:52.703] I0814 11:20:52.702201   49763 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0814 11:20:52.703] I0814 11:20:52.702321   49763 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 11:20:52.703] I0814 11:20:52.703033   49763 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 11:20:52.707] I0814 11:20:52.706832   49763 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0814 11:20:52.795] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0814 11:20:52.895] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0814 11:20:52.896] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 11:20:52.908] +++ exit code: 0
I0814 11:20:52.942] Recording: run_kubectl_run_tests
I0814 11:20:52.942] Running command: run_kubectl_run_tests
I0814 11:20:52.962] 
... skipping 84 lines ...
I0814 11:20:55.326] Context "test" modified.
I0814 11:20:55.332] +++ [0814 11:20:55] Testing kubectl create filter
I0814 11:20:55.420] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:20:55.572] (Bpod/selector-test-pod created
I0814 11:20:55.665] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0814 11:20:55.747] (BSuccessful
I0814 11:20:55.748] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0814 11:20:55.748] has:pods "selector-test-pod-dont-apply" not found
I0814 11:20:55.829] pod "selector-test-pod" deleted
I0814 11:20:55.845] +++ exit code: 0
I0814 11:20:55.873] Recording: run_kubectl_apply_deployments_tests
I0814 11:20:55.874] Running command: run_kubectl_apply_deployments_tests
I0814 11:20:55.893] 
... skipping 38 lines ...
I0814 11:20:57.638] (Bapps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:20:57.726] (Bapps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:20:57.816] (Bapps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:20:57.974] (Bdeployment.apps/nginx created
I0814 11:20:58.079] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0814 11:21:02.327] (BSuccessful
I0814 11:21:02.327] message:Error from server (Conflict): error when applying patch:
I0814 11:21:02.328] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565781655-1363\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0814 11:21:02.328] to:
I0814 11:21:02.328] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0814 11:21:02.328] Name: "nginx", Namespace: "namespace-1565781655-1363"
I0814 11:21:02.331] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565781655-1363\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-14T11:20:57Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:unavailableReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T11:20:57Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-14T11:20:57Z"] map["apiVersion":"apps/v1" "fields":map["f:status":map["f:conditions":map["k:{\"type\":\"Progressing\"}":map["f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[]]] "f:replicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T11:20:58Z"]] "name":"nginx" "namespace":"namespace-1565781655-1363" "resourceVersion":"588" "selfLink":"/apis/apps/v1/namespaces/namespace-1565781655-1363/deployments/nginx" "uid":"d7cf3ff3-e604-4caf-8d8a-df637f8b141a"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" 
"terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-08-14T11:20:57Z" "lastUpdateTime":"2019-08-14T11:20:57Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-14T11:20:57Z" "lastUpdateTime":"2019-08-14T11:20:58Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0814 11:21:02.332] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0814 11:21:02.332] has:Error from server (Conflict)
W0814 11:21:02.432] I0814 11:20:57.980773   53204 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781655-1363", Name:"nginx", UID:"d7cf3ff3-e604-4caf-8d8a-df637f8b141a", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0814 11:21:02.433] I0814 11:20:57.984787   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781655-1363", Name:"nginx-7dbc4d9f", UID:"a4daca03-1f22-4eac-8b53-6e76957d6b45", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-qjgsb
W0814 11:21:02.433] I0814 11:20:57.991157   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781655-1363", Name:"nginx-7dbc4d9f", UID:"a4daca03-1f22-4eac-8b53-6e76957d6b45", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-bhm6w
W0814 11:21:02.434] I0814 11:20:57.991607   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781655-1363", Name:"nginx-7dbc4d9f", UID:"a4daca03-1f22-4eac-8b53-6e76957d6b45", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-l5hb7
W0814 11:21:04.428] I0814 11:21:04.427807   53204 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565781647-30072
W0814 11:21:06.800] E0814 11:21:06.799832   53204 replica_set.go:450] Sync "namespace-1565781655-1363/nginx-7dbc4d9f" failed with replicasets.apps "nginx-7dbc4d9f" not found
I0814 11:21:07.578] deployment.apps/nginx configured
I0814 11:21:07.676] Successful
I0814 11:21:07.677] message:        "name": "nginx2"
I0814 11:21:07.677]           "name": "nginx2"
I0814 11:21:07.677] has:"name": "nginx2"
W0814 11:21:07.778] I0814 11:21:07.584918   53204 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781655-1363", Name:"nginx", UID:"eaae78d8-66af-4c81-8efb-8f934f6c3619", APIVersion:"apps/v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
... skipping 168 lines ...
I0814 11:21:14.815] +++ [0814 11:21:14] Creating namespace namespace-1565781674-975
I0814 11:21:14.891] namespace/namespace-1565781674-975 created
I0814 11:21:14.963] Context "test" modified.
I0814 11:21:14.968] +++ [0814 11:21:14] Testing kubectl get
I0814 11:21:15.056] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:21:15.143] (BSuccessful
I0814 11:21:15.144] message:Error from server (NotFound): pods "abc" not found
I0814 11:21:15.144] has:pods "abc" not found
I0814 11:21:15.232] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:21:15.314] (BSuccessful
I0814 11:21:15.315] message:Error from server (NotFound): pods "abc" not found
I0814 11:21:15.316] has:pods "abc" not found
I0814 11:21:15.404] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:21:15.488] (BSuccessful
I0814 11:21:15.488] message:{
I0814 11:21:15.488]     "apiVersion": "v1",
I0814 11:21:15.488]     "items": [],
... skipping 23 lines ...
I0814 11:21:15.820] has not:No resources found
I0814 11:21:15.908] Successful
I0814 11:21:15.909] message:NAME
I0814 11:21:15.910] has not:No resources found
I0814 11:21:15.995] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:21:16.095] (BSuccessful
I0814 11:21:16.096] message:error: the server doesn't have a resource type "foobar"
I0814 11:21:16.096] has not:No resources found
I0814 11:21:16.179] Successful
I0814 11:21:16.179] message:No resources found in namespace-1565781674-975 namespace.
I0814 11:21:16.179] has:No resources found
I0814 11:21:16.260] Successful
I0814 11:21:16.260] message:
I0814 11:21:16.261] has not:No resources found
I0814 11:21:16.345] Successful
I0814 11:21:16.346] message:No resources found in namespace-1565781674-975 namespace.
I0814 11:21:16.347] has:No resources found
I0814 11:21:16.436] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:21:16.519] (BSuccessful
I0814 11:21:16.520] message:Error from server (NotFound): pods "abc" not found
I0814 11:21:16.520] has:pods "abc" not found
I0814 11:21:16.520] FAIL!
I0814 11:21:16.521] message:Error from server (NotFound): pods "abc" not found
I0814 11:21:16.521] has not:List
I0814 11:21:16.521] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0814 11:21:16.641] Successful
I0814 11:21:16.641] message:I0814 11:21:16.586108   63767 loader.go:375] Config loaded from file:  /tmp/tmp.51qpBwgKUj/.kube/config
I0814 11:21:16.641] I0814 11:21:16.587885   63767 round_trippers.go:471] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0814 11:21:16.642] I0814 11:21:16.612861   63767 round_trippers.go:471] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0814 11:21:22.189] Successful
I0814 11:21:22.189] message:NAME    DATA   AGE
I0814 11:21:22.189] one     0      1s
I0814 11:21:22.189] three   0      0s
I0814 11:21:22.190] two     0      0s
I0814 11:21:22.190] STATUS    REASON          MESSAGE
I0814 11:21:22.190] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:21:22.190] has not:watch is only supported on individual resources
I0814 11:21:23.274] Successful
I0814 11:21:23.275] message:STATUS    REASON          MESSAGE
I0814 11:21:23.275] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:21:23.275] has not:watch is only supported on individual resources
I0814 11:21:23.280] +++ [0814 11:21:23] Creating namespace namespace-1565781683-8051
I0814 11:21:23.355] namespace/namespace-1565781683-8051 created
I0814 11:21:23.424] Context "test" modified.
I0814 11:21:23.513] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:21:23.670] (Bpod/valid-pod created
... skipping 104 lines ...
I0814 11:21:23.774] }
I0814 11:21:23.849] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:21:24.090] (B<no value>Successful
I0814 11:21:24.091] message:valid-pod:
I0814 11:21:24.091] has:valid-pod:
I0814 11:21:24.173] Successful
I0814 11:21:24.174] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0814 11:21:24.174] 	template was:
I0814 11:21:24.174] 		{.missing}
I0814 11:21:24.174] 	object given to jsonpath engine was:
I0814 11:21:24.176] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-14T11:21:23Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-14T11:21:23Z"}}, "name":"valid-pod", "namespace":"namespace-1565781683-8051", "resourceVersion":"691", "selfLink":"/api/v1/namespaces/namespace-1565781683-8051/pods/valid-pod", "uid":"599eb975-d769-44db-95a9-0544b9a55416"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0814 11:21:24.176] has:missing is not found
I0814 11:21:24.256] Successful
I0814 11:21:24.257] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0814 11:21:24.257] 	template was:
I0814 11:21:24.258] 		{{.missing}}
I0814 11:21:24.258] 	raw data was:
I0814 11:21:24.259] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-14T11:21:23Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-14T11:21:23Z"}],"name":"valid-pod","namespace":"namespace-1565781683-8051","resourceVersion":"691","selfLink":"/api/v1/namespaces/namespace-1565781683-8051/pods/valid-pod","uid":"599eb975-d769-44db-95a9-0544b9a55416"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0814 11:21:24.259] 	object given to template engine was:
I0814 11:21:24.261] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-14T11:21:23Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-14T11:21:23Z]] name:valid-pod namespace:namespace-1565781683-8051 resourceVersion:691 selfLink:/api/v1/namespaces/namespace-1565781683-8051/pods/valid-pod uid:599eb975-d769-44db-95a9-0544b9a55416] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0814 11:21:24.261] has:map has no entry for key "missing"
W0814 11:21:24.361] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
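Both template failures above are deliberate: the tests ask for a key the pod object does not have, once through jsonpath and once through the Go template printer. For contrast, a working lookup against the same object (a sketch, assuming valid-pod still exists):

  # jsonpath output: prints the pod's name
  kubectl get pod valid-pod -o jsonpath='{.metadata.name}'
  # go-template output: the same field through the template printer
  kubectl get pod valid-pod -o go-template='{{.metadata.name}}'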
I0814 11:21:25.350] Successful
I0814 11:21:25.350] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 11:21:25.350] valid-pod   0/1     Pending   0          1s
I0814 11:21:25.351] STATUS      REASON          MESSAGE
I0814 11:21:25.351] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:21:25.351] has:STATUS
I0814 11:21:25.351] Successful
I0814 11:21:25.351] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 11:21:25.352] valid-pod   0/1     Pending   0          1s
I0814 11:21:25.352] STATUS      REASON          MESSAGE
I0814 11:21:25.352] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:21:25.352] has:valid-pod
I0814 11:21:26.436] Successful
I0814 11:21:26.436] message:pod/valid-pod
I0814 11:21:26.437] has not:STATUS
I0814 11:21:26.438] Successful
I0814 11:21:26.438] message:pod/valid-pod
... skipping 144 lines ...
I0814 11:21:27.560] status:
I0814 11:21:27.560]   phase: Pending
I0814 11:21:27.560]   qosClass: Guaranteed
I0814 11:21:27.560] ---
I0814 11:21:27.560] has:name: valid-pod
I0814 11:21:27.626] Successful
I0814 11:21:27.627] message:Error from server (NotFound): pods "invalid-pod" not found
I0814 11:21:27.628] has:"invalid-pod" not found
I0814 11:21:27.715] pod "valid-pod" deleted
I0814 11:21:27.819] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:21:27.993] (Bpod/redis-master created
I0814 11:21:27.998] pod/valid-pod created
I0814 11:21:28.096] Successful
... skipping 35 lines ...
I0814 11:21:29.283] +++ command: run_kubectl_exec_pod_tests
I0814 11:21:29.294] +++ [0814 11:21:29] Creating namespace namespace-1565781689-3360
I0814 11:21:29.373] namespace/namespace-1565781689-3360 created
I0814 11:21:29.450] Context "test" modified.
I0814 11:21:29.457] +++ [0814 11:21:29] Testing kubectl exec POD COMMAND
I0814 11:21:29.547] Successful
I0814 11:21:29.548] message:Error from server (NotFound): pods "abc" not found
I0814 11:21:29.548] has:pods "abc" not found
I0814 11:21:29.740] pod/test-pod created
I0814 11:21:29.858] Successful
I0814 11:21:29.858] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 11:21:29.858] has not:pods "test-pod" not found
I0814 11:21:29.860] Successful
I0814 11:21:29.860] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 11:21:29.860] has not:pod or type/name must be specified
I0814 11:21:29.949] pod "test-pod" deleted
I0814 11:21:29.968] +++ exit code: 0
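The BadRequest responses above are expected: in this test-cmd environment no scheduler runs, so test-pod never gets a node and exec has nothing to dial, which is exactly what "does not have a host assigned" reports. The command shape being exercised (a sketch, assuming test-pod exists):

  # run a command in the pod's container; fails here because the pod was never scheduled
  kubectl exec test-pod -- date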
I0814 11:21:30.003] Recording: run_kubectl_exec_resource_name_tests
I0814 11:21:30.004] Running command: run_kubectl_exec_resource_name_tests
I0814 11:21:30.027] 
... skipping 2 lines ...
I0814 11:21:30.034] +++ command: run_kubectl_exec_resource_name_tests
I0814 11:21:30.048] +++ [0814 11:21:30] Creating namespace namespace-1565781690-30404
I0814 11:21:30.129] namespace/namespace-1565781690-30404 created
I0814 11:21:30.202] Context "test" modified.
I0814 11:21:30.208] +++ [0814 11:21:30] Testing kubectl exec TYPE/NAME COMMAND
I0814 11:21:30.325] Successful
I0814 11:21:30.326] message:error: the server doesn't have a resource type "foo"
I0814 11:21:30.326] has:error:
I0814 11:21:30.419] Successful
I0814 11:21:30.419] message:Error from server (NotFound): deployments.apps "bar" not found
I0814 11:21:30.419] has:"bar" not found
I0814 11:21:30.587] pod/test-pod created
I0814 11:21:30.750] replicaset.apps/frontend created
W0814 11:21:30.851] I0814 11:21:30.757223   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781690-30404", Name:"frontend", UID:"8b678993-6b2b-40b3-98bc-427d32caee23", APIVersion:"apps/v1", ResourceVersion:"744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9skwq
W0814 11:21:30.851] I0814 11:21:30.762921   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781690-30404", Name:"frontend", UID:"8b678993-6b2b-40b3-98bc-427d32caee23", APIVersion:"apps/v1", ResourceVersion:"744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7kr5g
W0814 11:21:30.852] I0814 11:21:30.763776   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781690-30404", Name:"frontend", UID:"8b678993-6b2b-40b3-98bc-427d32caee23", APIVersion:"apps/v1", ResourceVersion:"744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vwh6g
I0814 11:21:30.952] configmap/test-set-env-config created
I0814 11:21:31.022] Successful
I0814 11:21:31.023] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0814 11:21:31.023] has:not implemented
I0814 11:21:31.122] Successful
I0814 11:21:31.123] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 11:21:31.123] has not:not found
I0814 11:21:31.125] Successful
I0814 11:21:31.125] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 11:21:31.125] has not:pod or type/name must be specified
I0814 11:21:31.236] Successful
I0814 11:21:31.236] message:Error from server (BadRequest): pod frontend-7kr5g does not have a host assigned
I0814 11:21:31.237] has not:not found
I0814 11:21:31.239] Successful
I0814 11:21:31.239] message:Error from server (BadRequest): pod frontend-7kr5g does not have a host assigned
I0814 11:21:31.239] has not:pod or type/name must be specified
I0814 11:21:31.326] pod "test-pod" deleted
I0814 11:21:31.424] replicaset.apps "frontend" deleted
I0814 11:21:31.518] configmap "test-set-env-config" deleted
I0814 11:21:31.538] +++ exit code: 0
I0814 11:21:31.572] Recording: run_create_secret_tests
I0814 11:21:31.573] Running command: run_create_secret_tests
I0814 11:21:31.597] 
I0814 11:21:31.599] +++ Running case: test-cmd.run_create_secret_tests 
I0814 11:21:31.601] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:21:31.603] +++ command: run_create_secret_tests
I0814 11:21:31.706] Successful
I0814 11:21:31.706] message:Error from server (NotFound): secrets "mysecret" not found
I0814 11:21:31.707] has:secrets "mysecret" not found
I0814 11:21:31.883] Successful
I0814 11:21:31.884] message:Error from server (NotFound): secrets "mysecret" not found
I0814 11:21:31.884] has:secrets "mysecret" not found
I0814 11:21:31.885] Successful
I0814 11:21:31.886] message:user-specified
I0814 11:21:31.886] has:user-specified
I0814 11:21:31.959] Successful
I0814 11:21:32.038] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"48529013-4a73-4115-9a84-b01656086263","resourceVersion":"764","creationTimestamp":"2019-08-14T11:21:32Z"}}
... skipping 2 lines ...
I0814 11:21:32.198] has:uid
I0814 11:21:32.279] Successful
I0814 11:21:32.280] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"48529013-4a73-4115-9a84-b01656086263","resourceVersion":"765","creationTimestamp":"2019-08-14T11:21:32Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-14T11:21:32Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0814 11:21:32.281] has:config1
I0814 11:21:32.363] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"48529013-4a73-4115-9a84-b01656086263"}}
I0814 11:21:32.455] Successful
I0814 11:21:32.456] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0814 11:21:32.456] has:configmaps "tester-update-cm" not found
I0814 11:21:32.469] +++ exit code: 0
I0814 11:21:32.502] Recording: run_kubectl_create_kustomization_directory_tests
I0814 11:21:32.502] Running command: run_kubectl_create_kustomization_directory_tests
I0814 11:21:32.523] 
I0814 11:21:32.525] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
W0814 11:21:35.338] I0814 11:21:33.002281   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781690-30404", Name:"test-the-deployment-55cf944b", UID:"44b17b89-10e6-4110-8894-447887aa5e5e", APIVersion:"apps/v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-55cf944b-2j26q
W0814 11:21:35.339] I0814 11:21:33.002788   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781690-30404", Name:"test-the-deployment-55cf944b", UID:"44b17b89-10e6-4110-8894-447887aa5e5e", APIVersion:"apps/v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-55cf944b-778x9
I0814 11:21:36.332] Successful
I0814 11:21:36.332] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 11:21:36.332] valid-pod   0/1     Pending   0          1s
I0814 11:21:36.333] STATUS      REASON          MESSAGE
I0814 11:21:36.333] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:21:36.333] has:Timeout exceeded while reading body
I0814 11:21:36.418] Successful
I0814 11:21:36.418] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 11:21:36.418] valid-pod   0/1     Pending   0          2s
I0814 11:21:36.418] has:valid-pod
I0814 11:21:36.491] Successful
I0814 11:21:36.491] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0814 11:21:36.492] has:Invalid timeout value
I0814 11:21:36.573] pod "valid-pod" deleted
I0814 11:21:36.590] +++ exit code: 0
I0814 11:21:36.618] Recording: run_crd_tests
I0814 11:21:36.619] Running command: run_crd_tests
I0814 11:21:36.636] 
... skipping 245 lines ...
I0814 11:21:41.221] foo.company.com/test patched
I0814 11:21:41.312] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0814 11:21:41.395] (Bfoo.company.com/test patched
I0814 11:21:41.486] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0814 11:21:41.572] (Bfoo.company.com/test patched
I0814 11:21:41.665] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0814 11:21:41.818] (B+++ [0814 11:21:41] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0814 11:21:41.885] {
I0814 11:21:41.886]     "apiVersion": "company.com/v1",
I0814 11:21:41.886]     "kind": "Foo",
I0814 11:21:41.886]     "metadata": {
I0814 11:21:41.886]         "annotations": {
I0814 11:21:41.886]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 344 lines ...
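The "kubectl patch --local" failure a few lines up is expected for custom resources: with no registered Go type there is no strategic-merge metadata, so only JSON merge patches apply. A sketch of the form the error message recommends, using the foos/test object from the log:

  # strategic merge is rejected for this CR; use a JSON merge patch instead
  kubectl patch foos/test --type=merge -p '{"patched":"value1"}'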
I0814 11:22:09.447] (Bnamespace/non-native-resources created
I0814 11:22:09.610] bar.company.com/test created
I0814 11:22:09.704] crd.sh:455: Successful get bars {{len .items}}: 1
I0814 11:22:09.780] (Bnamespace "non-native-resources" deleted
I0814 11:22:15.004] crd.sh:458: Successful get bars {{len .items}}: 0
I0814 11:22:15.169] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0814 11:22:15.270] Error from server (NotFound): namespaces "non-native-resources" not found
I0814 11:22:15.370] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0814 11:22:15.371] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 11:22:15.478] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0814 11:22:15.505] +++ exit code: 0
I0814 11:22:15.537] Recording: run_cmd_with_img_tests
I0814 11:22:15.537] Running command: run_cmd_with_img_tests
... skipping 7 lines ...
I0814 11:22:15.724] +++ [0814 11:22:15] Testing cmd with image
I0814 11:22:15.822] Successful
I0814 11:22:15.823] message:deployment.apps/test1 created
I0814 11:22:15.823] has:deployment.apps/test1 created
I0814 11:22:15.910] deployment.apps "test1" deleted
I0814 11:22:15.989] Successful
I0814 11:22:15.989] message:error: Invalid image name "InvalidImageName": invalid reference format
I0814 11:22:15.989] has:error: Invalid image name "InvalidImageName": invalid reference format
I0814 11:22:16.000] +++ exit code: 0
I0814 11:22:16.056] +++ [0814 11:22:16] Testing recursive resources
I0814 11:22:16.062] +++ [0814 11:22:16] Creating namespace namespace-1565781736-32399
I0814 11:22:16.139] namespace/namespace-1565781736-32399 created
I0814 11:22:16.215] Context "test" modified.
I0814 11:22:16.315] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:16.584] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:16.585] (BSuccessful
I0814 11:22:16.586] message:pod/busybox0 created
I0814 11:22:16.586] pod/busybox1 created
I0814 11:22:16.587] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 11:22:16.587] has:error validating data: kind not set
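This block and the ones that follow drive kubectl's --recursive flag against hack/testdata/recursive/pod, a directory containing a manifest (busybox-broken.yaml) whose kind field is misspelled as "ind"; each recursive command is therefore expected to succeed for busybox0 and busybox1 while reporting one decode or validation error. A sketch of the pattern, reusing the directory from the log:

  # walk the directory tree and create every manifest; the broken file fails validation
  kubectl create -f hack/testdata/recursive/pod --recursive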
I0814 11:22:16.682] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:16.857] (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0814 11:22:16.859] (BSuccessful
I0814 11:22:16.860] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:16.860] has:Object 'Kind' is missing
I0814 11:22:16.952] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:17.255] (Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 11:22:17.257] (BSuccessful
I0814 11:22:17.257] message:pod/busybox0 replaced
I0814 11:22:17.257] pod/busybox1 replaced
I0814 11:22:17.258] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 11:22:17.258] has:error validating data: kind not set
I0814 11:22:17.361] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:17.460] (BSuccessful
I0814 11:22:17.461] message:Name:         busybox0
I0814 11:22:17.461] Namespace:    namespace-1565781736-32399
I0814 11:22:17.461] Priority:     0
I0814 11:22:17.461] Node:         <none>
... skipping 159 lines ...
I0814 11:22:17.475] has:Object 'Kind' is missing
I0814 11:22:17.557] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:17.746] (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0814 11:22:17.748] (BSuccessful
I0814 11:22:17.748] message:pod/busybox0 annotated
I0814 11:22:17.748] pod/busybox1 annotated
I0814 11:22:17.749] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:17.749] has:Object 'Kind' is missing
I0814 11:22:17.838] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:18.112] (Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 11:22:18.114] (BSuccessful
I0814 11:22:18.115] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 11:22:18.115] pod/busybox0 configured
I0814 11:22:18.115] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 11:22:18.115] pod/busybox1 configured
I0814 11:22:18.115] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 11:22:18.115] has:error validating data: kind not set
I0814 11:22:18.202] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:18.362] (Bdeployment.apps/nginx created
W0814 11:22:18.463] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 11:22:18.464] I0814 11:22:15.825837   53204 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781735-31032", Name:"test1", UID:"02219eda-e1b9-4109-85e3-84786096172a", APIVersion:"apps/v1", ResourceVersion:"922", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-9797f89d8 to 1
W0814 11:22:18.465] I0814 11:22:15.832913   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781735-31032", Name:"test1-9797f89d8", UID:"3ad94cda-1be4-4833-971f-36c39fd84250", APIVersion:"apps/v1", ResourceVersion:"923", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-mgfb4
W0814 11:22:18.465] W0814 11:22:16.183222   49763 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 11:22:18.466] E0814 11:22:16.185714   53204 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.466] W0814 11:22:16.283234   49763 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 11:22:18.466] E0814 11:22:16.284895   53204 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.467] W0814 11:22:16.379815   49763 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 11:22:18.467] E0814 11:22:16.381212   53204 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.467] W0814 11:22:16.490188   49763 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 11:22:18.468] E0814 11:22:16.491730   53204 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.468] E0814 11:22:17.187662   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.468] E0814 11:22:17.286566   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.469] E0814 11:22:17.382852   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.469] E0814 11:22:17.493320   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.470] E0814 11:22:18.189252   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.470] E0814 11:22:18.288264   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.470] I0814 11:22:18.367238   53204 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781736-32399", Name:"nginx", UID:"d253bda1-2339-47ed-8824-4e068fef1413", APIVersion:"apps/v1", ResourceVersion:"947", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0814 11:22:18.471] I0814 11:22:18.371630   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781736-32399", Name:"nginx-bbbbb95b5", UID:"e5a29f1b-8ebf-4179-a357-6d53e4aac86d", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-5gm8x
W0814 11:22:18.471] I0814 11:22:18.385315   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781736-32399", Name:"nginx-bbbbb95b5", UID:"e5a29f1b-8ebf-4179-a357-6d53e4aac86d", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-qd5gd
W0814 11:22:18.472] I0814 11:22:18.393950   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781736-32399", Name:"nginx-bbbbb95b5", UID:"e5a29f1b-8ebf-4179-a357-6d53e4aac86d", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-z6mg6
W0814 11:22:18.472] E0814 11:22:18.391267   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:18.495] E0814 11:22:18.494868   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:18.597] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 11:22:18.597] (Bgeneric-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 11:22:18.741] (Bgeneric-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0814 11:22:18.743] (BSuccessful
I0814 11:22:18.743] message:apiVersion: extensions/v1beta1
I0814 11:22:18.743] kind: Deployment
... skipping 40 lines ...
I0814 11:22:18.823] deployment.apps "nginx" deleted
I0814 11:22:18.925] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:19.096] (Bgeneric-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:19.098] (BSuccessful
I0814 11:22:19.098] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0814 11:22:19.098] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 11:22:19.099] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:19.099] has:Object 'Kind' is missing
I0814 11:22:19.187] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:19.273] (BSuccessful
I0814 11:22:19.273] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:19.273] has:busybox0:busybox1:
I0814 11:22:19.274] Successful
I0814 11:22:19.275] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:19.275] has:Object 'Kind' is missing
I0814 11:22:19.365] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:19.461] (Bpod/busybox0 labeled
I0814 11:22:19.462] pod/busybox1 labeled
I0814 11:22:19.462] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:19.552] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0814 11:22:19.554] (BSuccessful
I0814 11:22:19.555] message:pod/busybox0 labeled
I0814 11:22:19.555] pod/busybox1 labeled
I0814 11:22:19.556] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:19.556] has:Object 'Kind' is missing
I0814 11:22:19.647] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:19.737] (Bpod/busybox0 patched
I0814 11:22:19.737] pod/busybox1 patched
I0814 11:22:19.738] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:19.831] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0814 11:22:19.833] (BSuccessful
I0814 11:22:19.833] message:pod/busybox0 patched
I0814 11:22:19.834] pod/busybox1 patched
I0814 11:22:19.834] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:19.834] has:Object 'Kind' is missing
I0814 11:22:19.926] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:20.100] (Bgeneric-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:20.103] (BSuccessful
I0814 11:22:20.103] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 11:22:20.103] pod "busybox0" force deleted
I0814 11:22:20.103] pod "busybox1" force deleted
I0814 11:22:20.104] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:22:20.104] has:Object 'Kind' is missing
I0814 11:22:20.191] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:20.344] (Breplicationcontroller/busybox0 created
I0814 11:22:20.352] replicationcontroller/busybox1 created
I0814 11:22:20.453] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:20.542] (Bgeneric-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:20.629] (Bgeneric-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 11:22:20.713] (Bgeneric-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 11:22:20.884] (Bgeneric-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 11:22:20.971] (Bgeneric-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 11:22:20.974] (BSuccessful
I0814 11:22:20.974] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0814 11:22:20.974] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0814 11:22:20.974] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:20.975] has:Object 'Kind' is missing
I0814 11:22:21.055] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0814 11:22:21.135] horizontalpodautoscaler.autoscaling "busybox1" deleted
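The HPA assertions above (min 1, max 2, target 80% CPU) come from a recursive autoscale over the same busybox controllers. The equivalent single-resource invocation would look like this (a sketch, assuming the busybox0 replication controller exists):

  # create an HPA with the bounds and target asserted above
  kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80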
I0814 11:22:21.232] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:21.319] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 11:22:21.405] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 11:22:21.593] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 11:22:21.681] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 11:22:21.683] Successful
I0814 11:22:21.684] message:service/busybox0 exposed
I0814 11:22:21.684] service/busybox1 exposed
I0814 11:22:21.685] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:21.685] has:Object 'Kind' is missing
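The "<no value> 80" output at generic-resources.sh:359-360 means the exposed service has an unnamed port 80. A sketch of an equivalent single-resource expose and the template used to read it back:

  # Expose the controller on port 80; a single port created this way has no name.
  kubectl expose rc busybox0 --port=80
  kubectl get service busybox0 -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'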
I0814 11:22:21.773] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:21.861] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 11:22:21.949] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 11:22:22.152] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0814 11:22:22.239] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0814 11:22:22.241] Successful
I0814 11:22:22.242] message:replicationcontroller/busybox0 scaled
I0814 11:22:22.242] replicationcontroller/busybox1 scaled
I0814 11:22:22.243] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:22.243] has:Object 'Kind' is missing
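The replica counts move from 1 to 2 between generic-resources.sh:367-368 and 372-373 because the controllers are scaled. A sketch of an equivalent scale command (controller names from the log):

  # Scale both replication controllers to two replicas, matching the 2/2 assertions.
  kubectl scale rc busybox0 busybox1 --replicas=2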
I0814 11:22:22.334] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:22.514] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:22.515] Successful
I0814 11:22:22.516] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 11:22:22.516] replicationcontroller "busybox0" force deleted
I0814 11:22:22.517] replicationcontroller "busybox1" force deleted
I0814 11:22:22.517] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:22.517] has:Object 'Kind' is missing
I0814 11:22:22.597] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:22.758] deployment.apps/nginx1-deployment created
I0814 11:22:22.764] deployment.apps/nginx0-deployment created
W0814 11:22:22.864] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 11:22:22.865] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0814 11:22:22.865] E0814 11:22:19.190651   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.866] E0814 11:22:19.289856   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.866] E0814 11:22:19.395728   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.866] E0814 11:22:19.496466   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.866] E0814 11:22:20.192289   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.867] E0814 11:22:20.291675   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.867] I0814 11:22:20.349550   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781736-32399", Name:"busybox0", UID:"689875f5-78ca-41a9-871d-fa7833f2ca4a", APIVersion:"v1", ResourceVersion:"978", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-82bx9
W0814 11:22:22.867] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 11:22:22.868] I0814 11:22:20.354908   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781736-32399", Name:"busybox1", UID:"9e9e385e-182d-47cf-9407-cc041193c590", APIVersion:"v1", ResourceVersion:"980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-6trwx
W0814 11:22:22.868] E0814 11:22:20.397502   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.868] E0814 11:22:20.498215   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.869] E0814 11:22:21.193916   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.869] E0814 11:22:21.293240   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.869] E0814 11:22:21.398791   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.870] E0814 11:22:21.499359   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.870] I0814 11:22:22.051026   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781736-32399", Name:"busybox0", UID:"689875f5-78ca-41a9-871d-fa7833f2ca4a", APIVersion:"v1", ResourceVersion:"999", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7xczv
W0814 11:22:22.870] I0814 11:22:22.062746   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781736-32399", Name:"busybox1", UID:"9e9e385e-182d-47cf-9407-cc041193c590", APIVersion:"v1", ResourceVersion:"1003", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-k84v4
W0814 11:22:22.870] E0814 11:22:22.195558   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.871] E0814 11:22:22.294729   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.871] E0814 11:22:22.400594   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.871] E0814 11:22:22.500948   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:22.872] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 11:22:22.872] I0814 11:22:22.764028   53204 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781736-32399", Name:"nginx1-deployment", UID:"1dab8f7e-f726-429b-affe-ce3afdf2bd25", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0814 11:22:22.872] I0814 11:22:22.769330   53204 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781736-32399", Name:"nginx0-deployment", UID:"d1f235ae-1af3-4ae6-85be-e03b225f0d56", APIVersion:"apps/v1", ResourceVersion:"1021", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0814 11:22:22.873] I0814 11:22:22.769756   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781736-32399", Name:"nginx1-deployment-84f7f49fb7", UID:"f3cd2b2e-53f6-4df5-b6aa-5de6f216c1aa", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-p9c8g
W0814 11:22:22.873] I0814 11:22:22.772925   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781736-32399", Name:"nginx0-deployment-57475bf54d", UID:"99e87e70-efa8-4865-abe6-0617f3be8e44", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-rsh77
W0814 11:22:22.874] I0814 11:22:22.774041   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781736-32399", Name:"nginx1-deployment-84f7f49fb7", UID:"f3cd2b2e-53f6-4df5-b6aa-5de6f216c1aa", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-sk4tm
W0814 11:22:22.874] I0814 11:22:22.777419   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781736-32399", Name:"nginx0-deployment-57475bf54d", UID:"99e87e70-efa8-4865-abe6-0617f3be8e44", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-2pf47
I0814 11:22:22.974] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0814 11:22:22.975] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 11:22:23.172] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 11:22:23.174] Successful
I0814 11:22:23.175] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0814 11:22:23.175] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0814 11:22:23.175] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:22:23.175] has:Object 'Kind' is missing
I0814 11:22:23.267] deployment.apps/nginx1-deployment paused
I0814 11:22:23.273] deployment.apps/nginx0-deployment paused
W0814 11:22:23.374] E0814 11:22:23.197000   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:23.374] E0814 11:22:23.295951   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:23.403] E0814 11:22:23.402404   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:23.503] E0814 11:22:23.502709   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:23.604] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0814 11:22:23.604] Successful
I0814 11:22:23.605] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:22:23.605] has:Object 'Kind' is missing
I0814 11:22:23.605] deployment.apps/nginx1-deployment resumed
I0814 11:22:23.606] deployment.apps/nginx0-deployment resumed
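Pausing a deployment sets .spec.paused to true, which is what generic-resources.sh:404 asserts before the deployments are resumed. A sketch of the same cycle on a single deployment:

  # Pause, check .spec.paused, then resume.
  kubectl rollout pause deployment nginx1-deployment
  kubectl get deployment nginx1-deployment -o go-template='{{.spec.paused}}'
  kubectl rollout resume deployment nginx1-deployment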
... skipping 7 lines ...
I0814 11:22:23.705] 1         <none>
I0814 11:22:23.706] 
I0814 11:22:23.706] deployment.apps/nginx0-deployment 
I0814 11:22:23.706] REVISION  CHANGE-CAUSE
I0814 11:22:23.706] 1         <none>
I0814 11:22:23.706] 
I0814 11:22:23.706] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:22:23.706] has:nginx0-deployment
I0814 11:22:23.707] Successful
I0814 11:22:23.707] message:deployment.apps/nginx1-deployment 
I0814 11:22:23.708] REVISION  CHANGE-CAUSE
I0814 11:22:23.708] 1         <none>
I0814 11:22:23.708] 
I0814 11:22:23.708] deployment.apps/nginx0-deployment 
I0814 11:22:23.708] REVISION  CHANGE-CAUSE
I0814 11:22:23.708] 1         <none>
I0814 11:22:23.708] 
I0814 11:22:23.708] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:22:23.708] has:nginx1-deployment
I0814 11:22:23.709] Successful
I0814 11:22:23.709] message:deployment.apps/nginx1-deployment 
I0814 11:22:23.710] REVISION  CHANGE-CAUSE
I0814 11:22:23.710] 1         <none>
I0814 11:22:23.710] 
I0814 11:22:23.710] deployment.apps/nginx0-deployment 
I0814 11:22:23.710] REVISION  CHANGE-CAUSE
I0814 11:22:23.710] 1         <none>
I0814 11:22:23.710] 
I0814 11:22:23.710] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:22:23.711] has:Object 'Kind' is missing
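The REVISION / CHANGE-CAUSE tables above come from rollout history; CHANGE-CAUSE is <none> because nothing recorded a change cause for revision 1. A sketch of producing the same table:

  # Print the revision table; CHANGE-CAUSE stays <none> unless the
  # kubernetes.io/change-cause annotation (or --record) is set.
  kubectl rollout history deployment nginx1-deployment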
I0814 11:22:23.792] deployment.apps "nginx1-deployment" force deleted
I0814 11:22:23.797] deployment.apps "nginx0-deployment" force deleted
W0814 11:22:23.898] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 11:22:23.899] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0814 11:22:24.199] E0814 11:22:24.198590   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:24.299] E0814 11:22:24.298343   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:24.405] E0814 11:22:24.404461   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:24.505] E0814 11:22:24.504340   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:24.892] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:25.056] replicationcontroller/busybox0 created
I0814 11:22:25.061] replicationcontroller/busybox1 created
I0814 11:22:25.160] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:22:25.253] Successful
I0814 11:22:25.254] message:no rollbacker has been implemented for "ReplicationController"
... skipping 2 lines ...
I0814 11:22:25.255] has:no rollbacker has been implemented for "ReplicationController"
I0814 11:22:25.255] Successful
I0814 11:22:25.256] message:no rollbacker has been implemented for "ReplicationController"
I0814 11:22:25.256] no rollbacker has been implemented for "ReplicationController"
I0814 11:22:25.256] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:25.257] has:Object 'Kind' is missing
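Rollback is only implemented for workload kinds such as Deployment, so running rollout undo against a ReplicationController yields the "no rollbacker has been implemented" error above. A sketch that should fail the same way (rc name from the log):

  # Expected to fail: ReplicationController has no rollback support.
  kubectl rollout undo rc busybox0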
W0814 11:22:25.357] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 11:22:25.358] I0814 11:22:25.062215   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781736-32399", Name:"busybox0", UID:"9d85ec6c-56b3-4497-bbbb-f70ffbb2d5a9", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-m9jtg
W0814 11:22:25.358] I0814 11:22:25.069106   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781736-32399", Name:"busybox1", UID:"a62f6f7d-e0ff-4f35-9a5d-b65baf918c81", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-grs6k
W0814 11:22:25.359] E0814 11:22:25.200230   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:25.359] E0814 11:22:25.299906   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:25.406] E0814 11:22:25.405994   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:25.506] E0814 11:22:25.505971   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:25.530] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 11:22:25.546] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:25.647] Successful
I0814 11:22:25.647] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:25.647] error: replicationcontrollers "busybox0" pausing is not supported
I0814 11:22:25.648] error: replicationcontrollers "busybox1" pausing is not supported
I0814 11:22:25.648] has:Object 'Kind' is missing
I0814 11:22:25.648] Successful
I0814 11:22:25.648] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:25.648] error: replicationcontrollers "busybox0" pausing is not supported
I0814 11:22:25.648] error: replicationcontrollers "busybox1" pausing is not supported
I0814 11:22:25.649] has:replicationcontrollers "busybox0" pausing is not supported
I0814 11:22:25.649] Successful
I0814 11:22:25.649] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:25.649] error: replicationcontrollers "busybox0" pausing is not supported
I0814 11:22:25.649] error: replicationcontrollers "busybox1" pausing is not supported
I0814 11:22:25.649] has:replicationcontrollers "busybox1" pausing is not supported
I0814 11:22:25.650] Successful
I0814 11:22:25.650] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:25.650] error: replicationcontrollers "busybox0" resuming is not supported
I0814 11:22:25.650] error: replicationcontrollers "busybox1" resuming is not supported
I0814 11:22:25.650] has:Object 'Kind' is missing
I0814 11:22:25.650] Successful
I0814 11:22:25.651] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:25.651] error: replicationcontrollers "busybox0" resuming is not supported
I0814 11:22:25.651] error: replicationcontrollers "busybox1" resuming is not supported
I0814 11:22:25.651] has:replicationcontrollers "busybox0" resuming is not supported
I0814 11:22:25.651] Successful
I0814 11:22:25.652] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:22:25.652] error: replicationcontrollers "busybox0" resuming is not supported
I0814 11:22:25.652] error: replicationcontrollers "busybox1" resuming is not supported
I0814 11:22:25.652] has:replicationcontrollers "busybox0" resuming is not supported
I0814 11:22:25.652] replicationcontroller "busybox0" force deleted
I0814 11:22:25.652] replicationcontroller "busybox1" force deleted
W0814 11:22:26.202] E0814 11:22:26.202167   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:26.304] E0814 11:22:26.303347   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:26.408] E0814 11:22:26.407677   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:26.508] E0814 11:22:26.507860   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:26.609] Recording: run_namespace_tests
I0814 11:22:26.609] Running command: run_namespace_tests
I0814 11:22:26.609] 
I0814 11:22:26.609] +++ Running case: test-cmd.run_namespace_tests 
I0814 11:22:26.610] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:22:26.610] +++ command: run_namespace_tests
I0814 11:22:26.610] +++ [0814 11:22:26] Testing kubectl(v1:namespaces)
I0814 11:22:26.664] namespace/my-namespace created
I0814 11:22:26.758] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 11:22:26.835] namespace "my-namespace" deleted
W0814 11:22:27.204] E0814 11:22:27.203854   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:27.306] E0814 11:22:27.305337   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:27.410] E0814 11:22:27.409406   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:27.510] E0814 11:22:27.509807   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:28.206] E0814 11:22:28.205624   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:28.307] E0814 11:22:28.307003   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:28.411] E0814 11:22:28.410895   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:28.512] E0814 11:22:28.511938   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:29.207] E0814 11:22:29.207150   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:29.310] E0814 11:22:29.309640   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:29.414] E0814 11:22:29.413285   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:29.514] E0814 11:22:29.513652   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:30.209] E0814 11:22:30.209203   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:30.312] E0814 11:22:30.311724   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:30.415] E0814 11:22:30.415035   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:30.516] E0814 11:22:30.515395   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:31.211] E0814 11:22:31.210916   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:31.314] E0814 11:22:31.313332   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:31.417] E0814 11:22:31.416648   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:31.517] E0814 11:22:31.516830   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:31.950] namespace/my-namespace condition met
I0814 11:22:32.037] Successful
I0814 11:22:32.038] message:Error from server (NotFound): namespaces "my-namespace" not found
I0814 11:22:32.039] has: not found
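The "condition met" line followed by the NotFound assertion matches a delete-then-wait sequence. A sketch of one way to produce it, assuming kubectl wait is used to block until the namespace is gone (the timeout value is illustrative):

  kubectl delete namespace my-namespace
  # Blocks until the namespace object disappears, then prints "condition met".
  kubectl wait --for=delete namespace/my-namespace --timeout=60s
  # Now the namespace can no longer be found.
  kubectl get namespace my-namespace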
I0814 11:22:32.113] namespace/my-namespace created
I0814 11:22:32.210] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 11:22:32.456] Successful
I0814 11:22:32.456] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 11:22:32.456] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0814 11:22:32.460] namespace "namespace-1565781693-2935" deleted
I0814 11:22:32.460] namespace "namespace-1565781694-17795" deleted
I0814 11:22:32.461] namespace "namespace-1565781696-10578" deleted
I0814 11:22:32.461] namespace "namespace-1565781697-18530" deleted
I0814 11:22:32.461] namespace "namespace-1565781735-31032" deleted
I0814 11:22:32.461] namespace "namespace-1565781736-32399" deleted
I0814 11:22:32.461] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 11:22:32.461] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 11:22:32.461] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 11:22:32.461] has:warning: deleting cluster-scoped resources
I0814 11:22:32.461] Successful
I0814 11:22:32.462] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 11:22:32.462] namespace "kube-node-lease" deleted
I0814 11:22:32.462] namespace "my-namespace" deleted
I0814 11:22:32.462] namespace "namespace-1565781602-13316" deleted
... skipping 27 lines ...
I0814 11:22:32.465] namespace "namespace-1565781693-2935" deleted
I0814 11:22:32.465] namespace "namespace-1565781694-17795" deleted
I0814 11:22:32.465] namespace "namespace-1565781696-10578" deleted
I0814 11:22:32.465] namespace "namespace-1565781697-18530" deleted
I0814 11:22:32.465] namespace "namespace-1565781735-31032" deleted
I0814 11:22:32.465] namespace "namespace-1565781736-32399" deleted
I0814 11:22:32.465] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 11:22:32.466] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 11:22:32.466] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 11:22:32.466] has:namespace "my-namespace" deleted
I0814 11:22:32.562] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0814 11:22:32.637] namespace/other created
I0814 11:22:32.726] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0814 11:22:32.816] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:32.975] pod/valid-pod created
I0814 11:22:33.072] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:22:33.172] (Bcore.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:22:33.255] Successful
I0814 11:22:33.256] message:error: a resource cannot be retrieved by name across all namespaces
I0814 11:22:33.256] has:a resource cannot be retrieved by name across all namespaces
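kubectl refuses to combine a resource name with --all-namespaces, which is the error asserted here. A sketch of the rejected call:

  # Fails: a named resource cannot be fetched across all namespaces.
  kubectl get pod valid-pod --all-namespaces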
I0814 11:22:33.345] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:22:33.427] pod "valid-pod" force deleted
I0814 11:22:33.522] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:33.599] namespace "other" deleted
W0814 11:22:33.700] E0814 11:22:32.212830   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:33.700] E0814 11:22:32.314687   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:33.701] E0814 11:22:32.417955   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:33.701] E0814 11:22:32.518394   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:33.701] E0814 11:22:33.214517   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:33.702] E0814 11:22:33.316516   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:33.702] I0814 11:22:33.321981   53204 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 11:22:33.702] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 11:22:33.702] E0814 11:22:33.419260   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:33.703] I0814 11:22:33.422374   53204 controller_utils.go:1036] Caches are synced for resource quota controller
W0814 11:22:33.703] E0814 11:22:33.520009   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:33.739] I0814 11:22:33.738837   53204 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 11:22:33.840] I0814 11:22:33.839427   53204 controller_utils.go:1036] Caches are synced for garbage collector controller
W0814 11:22:34.217] E0814 11:22:34.216190   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:34.319] E0814 11:22:34.318361   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:34.421] E0814 11:22:34.420924   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:34.522] E0814 11:22:34.521644   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:35.218] E0814 11:22:35.217858   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:35.320] E0814 11:22:35.320091   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:35.423] E0814 11:22:35.422506   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:35.524] E0814 11:22:35.523477   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:35.791] I0814 11:22:35.790498   53204 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565781736-32399
W0814 11:22:35.796] I0814 11:22:35.795395   53204 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565781736-32399
W0814 11:22:36.220] E0814 11:22:36.219303   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:36.322] E0814 11:22:36.321751   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:36.424] E0814 11:22:36.424256   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:36.525] E0814 11:22:36.525056   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:37.222] E0814 11:22:37.221377   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:37.324] E0814 11:22:37.323346   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:37.426] E0814 11:22:37.426210   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:37.527] E0814 11:22:37.526645   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:38.223] E0814 11:22:38.223071   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:38.326] E0814 11:22:38.325909   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:38.430] E0814 11:22:38.429551   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:38.529] E0814 11:22:38.528295   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:38.717] +++ exit code: 0
I0814 11:22:38.749] Recording: run_secrets_test
I0814 11:22:38.749] Running command: run_secrets_test
I0814 11:22:38.767] 
I0814 11:22:38.769] +++ Running case: test-cmd.run_secrets_test 
I0814 11:22:38.771] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 58 lines ...
I0814 11:22:40.645] secret "test-secret" deleted
I0814 11:22:40.722] secret/test-secret created
I0814 11:22:40.807] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 11:22:40.888] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 11:22:40.966] secret "test-secret" deleted
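core.sh:774 checks that the secret's type is kubernetes.io/tls, which is what kubectl create secret tls produces. A sketch (tls.crt and tls.key are placeholder paths, not part of the test data):

  # Create a TLS-typed secret from a certificate/key pair and read back its type.
  kubectl create secret tls test-secret --cert=tls.crt --key=tls.key --namespace=test-secrets
  kubectl get secret test-secret --namespace=test-secrets -o go-template='{{.type}}'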
W0814 11:22:41.067] I0814 11:22:39.010149   70197 loader.go:375] Config loaded from file:  /tmp/tmp.51qpBwgKUj/.kube/config
W0814 11:22:41.067] E0814 11:22:39.224550   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.067] E0814 11:22:39.327235   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.068] E0814 11:22:39.430728   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.068] E0814 11:22:39.529642   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.068] E0814 11:22:40.225702   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.068] E0814 11:22:40.328576   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.068] E0814 11:22:40.432092   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.069] E0814 11:22:40.531020   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:41.169] secret/secret-string-data created
I0814 11:22:41.205] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 11:22:41.293] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 11:22:41.376] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0814 11:22:41.450] secret "secret-string-data" deleted
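stringData is a write-only convenience field: the API server folds it into .data as base64 ("v1" becomes djE=, "v2" becomes djI=), which is why core.sh:796-798 sees the encoded map but <no value> for .stringData. A sketch of creating such a secret:

  # The server stores stringData values base64-encoded under .data.
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-string-data
    namespace: test-secrets
  stringData:
    k1: v1
    k2: v2
  EOF
  kubectl get secret secret-string-data --namespace=test-secrets -o go-template='{{.data}}'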
I0814 11:22:41.540] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:22:41.694] secret "test-secret" deleted
I0814 11:22:41.771] namespace "test-secrets" deleted
W0814 11:22:41.872] E0814 11:22:41.227086   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.872] E0814 11:22:41.329865   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.872] E0814 11:22:41.433447   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:41.872] E0814 11:22:41.532130   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:42.229] E0814 11:22:42.228846   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:42.332] E0814 11:22:42.331330   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:42.435] E0814 11:22:42.434899   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:42.534] E0814 11:22:42.533571   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:43.231] E0814 11:22:43.230486   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:43.333] E0814 11:22:43.332745   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:43.437] E0814 11:22:43.436334   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:43.535] E0814 11:22:43.535233   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:44.232] E0814 11:22:44.232114   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:44.335] E0814 11:22:44.334248   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:44.438] E0814 11:22:44.437793   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:44.537] E0814 11:22:44.536726   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:45.234] E0814 11:22:45.233819   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:45.336] E0814 11:22:45.335935   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:45.440] E0814 11:22:45.439974   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:45.539] E0814 11:22:45.538332   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:46.236] E0814 11:22:46.235345   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:46.338] E0814 11:22:46.337640   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:46.442] E0814 11:22:46.441433   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:46.540] E0814 11:22:46.540219   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:46.881] +++ exit code: 0
I0814 11:22:46.913] Recording: run_configmap_tests
I0814 11:22:46.913] Running command: run_configmap_tests
I0814 11:22:46.933] 
I0814 11:22:46.935] +++ Running case: test-cmd.run_configmap_tests 
I0814 11:22:46.937] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:22:46.939] +++ command: run_configmap_tests
I0814 11:22:46.950] +++ [0814 11:22:46] Creating namespace namespace-1565781766-30375
I0814 11:22:47.024] namespace/namespace-1565781766-30375 created
I0814 11:22:47.097] Context "test" modified.
I0814 11:22:47.103] +++ [0814 11:22:47] Testing configmaps
W0814 11:22:47.237] E0814 11:22:47.236904   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:47.338] configmap/test-configmap created
I0814 11:22:47.400] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0814 11:22:47.481] configmap "test-configmap" deleted
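core.sh:28 simply checks that a created configmap reports its own name. A sketch of the same round trip (the key1=value1 literal is illustrative):

  kubectl create configmap test-configmap --from-literal=key1=value1
  kubectl get configmap test-configmap -o go-template='{{.metadata.name}}'
  kubectl delete configmap test-configmap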
I0814 11:22:47.573] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0814 11:22:47.645] namespace/test-configmaps created
I0814 11:22:47.738] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0814 11:22:48.055] configmap/test-binary-configmap created
I0814 11:22:48.144] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0814 11:22:48.229] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0814 11:22:48.477] (Bconfigmap "test-configmap" deleted
I0814 11:22:48.563] configmap "test-binary-configmap" deleted
I0814 11:22:48.646] namespace "test-configmaps" deleted
W0814 11:22:48.747] E0814 11:22:47.339108   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:48.748] E0814 11:22:47.443002   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:48.748] E0814 11:22:47.541560   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:48.748] E0814 11:22:48.238432   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:48.749] E0814 11:22:48.340587   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:48.749] E0814 11:22:48.446318   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:48.749] E0814 11:22:48.543411   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:49.241] E0814 11:22:49.240341   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:49.343] E0814 11:22:49.342809   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:49.449] E0814 11:22:49.448187   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:49.546] E0814 11:22:49.545656   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:50.243] E0814 11:22:50.242308   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:50.345] E0814 11:22:50.344750   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:50.450] E0814 11:22:50.449840   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:50.548] E0814 11:22:50.547348   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:51.244] E0814 11:22:51.243912   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:51.347] E0814 11:22:51.346976   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:51.452] E0814 11:22:51.451270   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:51.549] E0814 11:22:51.549132   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:52.246] E0814 11:22:52.245652   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:52.349] E0814 11:22:52.348875   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:52.453] E0814 11:22:52.453026   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:52.551] E0814 11:22:52.550980   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:53.248] E0814 11:22:53.247816   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:53.352] E0814 11:22:53.351311   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:53.455] E0814 11:22:53.454918   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:53.554] E0814 11:22:53.553217   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:53.799] +++ exit code: 0
I0814 11:22:53.837] Recording: run_client_config_tests
I0814 11:22:53.837] Running command: run_client_config_tests
I0814 11:22:53.859] 
I0814 11:22:53.861] +++ Running case: test-cmd.run_client_config_tests 
I0814 11:22:53.864] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:22:53.866] +++ command: run_client_config_tests
I0814 11:22:53.880] +++ [0814 11:22:53] Creating namespace namespace-1565781773-20881
I0814 11:22:53.960] namespace/namespace-1565781773-20881 created
I0814 11:22:54.046] Context "test" modified.
I0814 11:22:54.054] +++ [0814 11:22:54] Testing client config
I0814 11:22:54.131] Successful
I0814 11:22:54.132] message:error: stat missing: no such file or directory
I0814 11:22:54.132] has:missing: no such file or directory
I0814 11:22:54.205] Successful
I0814 11:22:54.205] message:error: stat missing: no such file or directory
I0814 11:22:54.206] has:missing: no such file or directory
I0814 11:22:54.277] Successful
I0814 11:22:54.278] message:error: stat missing: no such file or directory
I0814 11:22:54.278] has:missing: no such file or directory
I0814 11:22:54.361] Successful
I0814 11:22:54.362] message:Error in configuration: context was not found for specified context: missing-context
I0814 11:22:54.362] has:context was not found for specified context: missing-context
I0814 11:22:54.438] Successful
I0814 11:22:54.438] message:error: no server found for cluster "missing-cluster"
I0814 11:22:54.438] has:no server found for cluster "missing-cluster"
I0814 11:22:54.516] Successful
I0814 11:22:54.517] message:error: auth info "missing-user" does not exist
I0814 11:22:54.517] has:auth info "missing-user" does not exist
W0814 11:22:54.618] E0814 11:22:54.249725   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:54.618] E0814 11:22:54.353618   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:54.619] E0814 11:22:54.456579   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:54.619] E0814 11:22:54.554834   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:22:54.719] Successful
I0814 11:22:54.720] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0814 11:22:54.720] has:error loading config file
I0814 11:22:54.744] Successful
I0814 11:22:54.745] message:error: stat missing-config: no such file or directory
I0814 11:22:54.746] has:no such file or directory
I0814 11:22:54.756] +++ exit code: 0
I0814 11:22:54.793] Recording: run_service_accounts_tests
I0814 11:22:54.794] Running command: run_service_accounts_tests
I0814 11:22:54.817] 
I0814 11:22:54.819] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0814 11:22:55.191] namespace/test-service-accounts created
I0814 11:22:55.290] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0814 11:22:55.372] serviceaccount/test-service-account created
I0814 11:22:55.470] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0814 11:22:55.553] serviceaccount "test-service-account" deleted
I0814 11:22:55.644] namespace "test-service-accounts" deleted
W0814 11:22:55.745] E0814 11:22:55.251232   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:55.745] E0814 11:22:55.355240   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:55.746] E0814 11:22:55.458288   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:55.746] E0814 11:22:55.556004   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:56.253] E0814 11:22:56.253070   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:56.358] E0814 11:22:56.357090   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:56.460] E0814 11:22:56.460008   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:56.558] E0814 11:22:56.557911   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:57.255] E0814 11:22:57.254907   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:57.359] E0814 11:22:57.358798   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:57.462] E0814 11:22:57.461918   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:57.560] E0814 11:22:57.559846   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:58.257] E0814 11:22:58.256476   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:58.361] E0814 11:22:58.360500   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:58.464] E0814 11:22:58.463629   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:58.563] E0814 11:22:58.562257   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:59.259] E0814 11:22:59.258438   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:59.363] E0814 11:22:59.362335   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:59.466] E0814 11:22:59.465619   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:22:59.564] E0814 11:22:59.563857   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:00.261] E0814 11:23:00.260249   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:00.364] E0814 11:23:00.364053   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:00.468] E0814 11:23:00.467312   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:00.568] E0814 11:23:00.567359   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:00.775] +++ exit code: 0
I0814 11:23:00.812] Recording: run_job_tests
I0814 11:23:00.813] Running command: run_job_tests
I0814 11:23:00.835] 
I0814 11:23:00.837] +++ Running case: test-cmd.run_job_tests 
I0814 11:23:00.840] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0814 11:23:01.608] Labels:                        run=pi
I0814 11:23:01.609] Annotations:                   <none>
I0814 11:23:01.609] Schedule:                      59 23 31 2 *
I0814 11:23:01.609] Concurrency Policy:            Allow
I0814 11:23:01.609] Suspend:                       False
I0814 11:23:01.610] Successful Job History Limit:  3
I0814 11:23:01.610] Failed Job History Limit:      1
I0814 11:23:01.610] Starting Deadline Seconds:     <unset>
I0814 11:23:01.610] Selector:                      <unset>
I0814 11:23:01.610] Parallelism:                   <unset>
I0814 11:23:01.610] Completions:                   <unset>
I0814 11:23:01.610] Pod Template:
I0814 11:23:01.610]   Labels:  run=pi
... skipping 32 lines ...
I0814 11:23:02.151]                 run=pi
I0814 11:23:02.151] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0814 11:23:02.151] Controlled By:  CronJob/pi
I0814 11:23:02.151] Parallelism:    1
I0814 11:23:02.151] Completions:    1
I0814 11:23:02.151] Start Time:     Wed, 14 Aug 2019 11:23:01 +0000
I0814 11:23:02.152] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0814 11:23:02.152] Pod Template:
I0814 11:23:02.152]   Labels:  controller-uid=10fbaafd-0f67-44b4-9656-e3c4f2d194e9
I0814 11:23:02.152]            job-name=test-job
I0814 11:23:02.152]            run=pi
I0814 11:23:02.152]   Containers:
I0814 11:23:02.152]    pi:
... skipping 15 lines ...
I0814 11:23:02.156]   Type    Reason            Age   From            Message
I0814 11:23:02.156]   ----    ------            ----  ----            -------
I0814 11:23:02.157]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-24x62
I0814 11:23:02.236] job.batch "test-job" deleted
I0814 11:23:02.326] cronjob.batch "pi" deleted
I0814 11:23:02.410] namespace "test-jobs" deleted
W0814 11:23:02.510] E0814 11:23:01.261829   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:02.511] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 11:23:02.511] E0814 11:23:01.365488   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:02.511] E0814 11:23:01.468937   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:02.512] E0814 11:23:01.568836   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:02.512] I0814 11:23:01.877600   53204 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"10fbaafd-0f67-44b4-9656-e3c4f2d194e9", APIVersion:"batch/v1", ResourceVersion:"1350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-24x62
W0814 11:23:02.512] E0814 11:23:02.263625   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:02.512] E0814 11:23:02.367096   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:02.512] E0814 11:23:02.470775   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:02.571] E0814 11:23:02.570476   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:03.266] E0814 11:23:03.265524   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:03.369] E0814 11:23:03.368936   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:03.473] E0814 11:23:03.472614   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:03.572] E0814 11:23:03.572166   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:04.268] E0814 11:23:04.267319   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:04.371] E0814 11:23:04.370485   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:04.475] E0814 11:23:04.474414   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:04.574] E0814 11:23:04.573858   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:05.269] E0814 11:23:05.269086   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:05.372] E0814 11:23:05.372156   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:05.476] E0814 11:23:05.476130   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:05.576] E0814 11:23:05.576096   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:06.271] E0814 11:23:06.270893   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:06.374] E0814 11:23:06.374189   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:06.478] E0814 11:23:06.477865   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:06.578] E0814 11:23:06.577863   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:07.273] E0814 11:23:07.272368   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:07.376] E0814 11:23:07.375835   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:07.479] E0814 11:23:07.478772   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:07.579] E0814 11:23:07.579213   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:07.680] +++ exit code: 0
I0814 11:23:07.680] Recording: run_create_job_tests
I0814 11:23:07.681] Running command: run_create_job_tests
I0814 11:23:07.681] 
I0814 11:23:07.681] +++ Running case: test-cmd.run_create_job_tests 
I0814 11:23:07.681] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 29 lines ...
I0814 11:23:09.240] podtemplate/nginx created
I0814 11:23:09.338] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 11:23:09.415] NAME    CONTAINERS   IMAGES   POD LABELS
I0814 11:23:09.416] nginx   nginx        nginx    name=nginx
W0814 11:23:09.516] I0814 11:23:07.849283   53204 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565781787-5028", Name:"test-job", UID:"08110534-d976-43e2-a7b9-8c0097e236b1", APIVersion:"batch/v1", ResourceVersion:"1368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-gh7df
W0814 11:23:09.517] I0814 11:23:08.110871   53204 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565781787-5028", Name:"test-job-pi", UID:"4f9edd8a-187e-4778-9158-43e6f4eea86a", APIVersion:"batch/v1", ResourceVersion:"1375", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-w5bpj
W0814 11:23:09.518] E0814 11:23:08.274078   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:09.518] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 11:23:09.518] E0814 11:23:08.377479   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:09.519] I0814 11:23:08.477836   53204 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565781787-5028", Name:"my-pi", UID:"a2502458-027c-42ec-b0c1-67aef9862865", APIVersion:"batch/v1", ResourceVersion:"1383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-mg756
W0814 11:23:09.519] E0814 11:23:08.480896   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:09.519] E0814 11:23:08.581106   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:09.519] I0814 11:23:09.237673   49763 controller.go:606] quota admission added evaluator for: podtemplates
W0814 11:23:09.520] E0814 11:23:09.275671   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:09.520] E0814 11:23:09.379001   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:09.520] E0814 11:23:09.483217   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:09.583] E0814 11:23:09.582633   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:09.684] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 11:23:09.684] podtemplate "nginx" deleted
I0814 11:23:09.777] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:23:09.790] +++ exit code: 0
I0814 11:23:09.822] Recording: run_service_tests
I0814 11:23:09.823] Running command: run_service_tests
... skipping 65 lines ...
I0814 11:23:10.716] Port:              <unset>  6379/TCP
I0814 11:23:10.716] TargetPort:        6379/TCP
I0814 11:23:10.717] Endpoints:         <none>
I0814 11:23:10.717] Session Affinity:  None
I0814 11:23:10.717] Events:            <none>
I0814 11:23:10.717] 
W0814 11:23:10.817] E0814 11:23:10.277192   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:10.818] E0814 11:23:10.380602   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:10.818] E0814 11:23:10.484767   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:10.819] E0814 11:23:10.584451   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:10.919] Successful describe services:
I0814 11:23:10.919] Name:              kubernetes
I0814 11:23:10.919] Namespace:         default
I0814 11:23:10.920] Labels:            component=apiserver
I0814 11:23:10.920]                    provider=kubernetes
I0814 11:23:10.920] Annotations:       <none>
... skipping 238 lines ...
I0814 11:23:11.781]   selector:
I0814 11:23:11.781]     role: padawan
I0814 11:23:11.781]   sessionAffinity: None
I0814 11:23:11.781]   type: ClusterIP
I0814 11:23:11.781] status:
I0814 11:23:11.781]   loadBalancer: {}
W0814 11:23:11.882] E0814 11:23:11.278748   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:11.882] E0814 11:23:11.382415   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:11.883] E0814 11:23:11.486135   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:11.883] E0814 11:23:11.585912   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:11.883] error: you must specify resources by --filename when --local is set.
W0814 11:23:11.883] Example resource specifications include:
W0814 11:23:11.883]    '-f rsrc.yaml'
W0814 11:23:11.883]    '--filename=rsrc.json'
I0814 11:23:11.984] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 11:23:12.104] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 11:23:12.184] service "redis-master" deleted
... skipping 2 lines ...
I0814 11:23:12.535] service/redis-master created
I0814 11:23:12.634] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 11:23:12.726] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 11:23:12.886] service/service-v1-test created
I0814 11:23:12.983] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 11:23:13.188] service/service-v1-test replaced
W0814 11:23:13.289] E0814 11:23:12.280627   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:13.290] E0814 11:23:12.384050   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:13.291] E0814 11:23:12.488114   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:13.291] E0814 11:23:12.587361   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:13.291] E0814 11:23:13.282899   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:13.385] E0814 11:23:13.385284   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:13.487] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 11:23:13.487] service "redis-master" deleted
I0814 11:23:13.488] service "service-v1-test" deleted
I0814 11:23:13.566] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:23:13.655] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:23:13.817] service/redis-master created
W0814 11:23:13.918] E0814 11:23:13.489814   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:13.918] E0814 11:23:13.589160   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:14.019] service/redis-slave created
I0814 11:23:14.081] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0814 11:23:14.170] Successful
I0814 11:23:14.170] message:NAME           RSRC
I0814 11:23:14.171] kubernetes     144
I0814 11:23:14.171] redis-master   1418
... skipping 29 lines ...
I0814 11:23:15.835] +++ [0814 11:23:15] Creating namespace namespace-1565781795-16954
I0814 11:23:15.908] namespace/namespace-1565781795-16954 created
I0814 11:23:15.982] Context "test" modified.
I0814 11:23:15.989] +++ [0814 11:23:15] Testing kubectl(v1:daemonsets)
I0814 11:23:16.076] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:23:16.259] daemonset.apps/bind created
W0814 11:23:16.360] E0814 11:23:14.284848   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.360] E0814 11:23:14.386887   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.360] E0814 11:23:14.491455   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.360] E0814 11:23:14.591107   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.361] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 11:23:16.361] I0814 11:23:15.160516   53204 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"2f4e2b25-ccf5-4cf3-866c-8ef042a9211e", APIVersion:"apps/v1", ResourceVersion:"1434", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0814 11:23:16.361] I0814 11:23:15.166413   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"7d5e5a2f-3dcf-43d2-a9d1-14d82cda82fa", APIVersion:"apps/v1", ResourceVersion:"1435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-8zbdp
W0814 11:23:16.362] I0814 11:23:15.170619   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"7d5e5a2f-3dcf-43d2-a9d1-14d82cda82fa", APIVersion:"apps/v1", ResourceVersion:"1435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-qnd7x
W0814 11:23:16.362] E0814 11:23:15.286572   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.362] E0814 11:23:15.388551   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.362] E0814 11:23:15.493022   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.362] E0814 11:23:15.592793   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.363] I0814 11:23:16.255502   49763 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0814 11:23:16.363] I0814 11:23:16.268801   49763 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0814 11:23:16.363] E0814 11:23:16.288644   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:16.390] E0814 11:23:16.390056   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:16.491] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I0814 11:23:16.549] daemonset.apps/bind configured
I0814 11:23:16.646] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I0814 11:23:16.742] daemonset.apps/bind image updated
I0814 11:23:16.833] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I0814 11:23:16.924] daemonset.apps/bind env updated
W0814 11:23:17.025] E0814 11:23:16.494667   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:17.026] E0814 11:23:16.594290   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:17.126] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I0814 11:23:17.135] daemonset.apps/bind resource requirements updated
I0814 11:23:17.229] apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
I0814 11:23:17.320] daemonset.apps/bind restarted
I0814 11:23:17.423] apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
I0814 11:23:17.501] daemonset.apps "bind" deleted
... skipping 37 lines ...
I0814 11:23:19.232] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 11:23:19.322] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0814 11:23:19.427] daemonset.apps/bind rolled back
I0814 11:23:19.525] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 11:23:19.617] (Bapps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 11:23:19.724] Successful
I0814 11:23:19.725] message:error: unable to find specified revision 1000000 in history
I0814 11:23:19.725] has:unable to find specified revision
I0814 11:23:19.813] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 11:23:19.909] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 11:23:20.016] daemonset.apps/bind rolled back
I0814 11:23:20.110] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0814 11:23:20.202] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0814 11:23:21.561] Namespace:    namespace-1565781800-25193
I0814 11:23:21.561] Selector:     app=guestbook,tier=frontend
I0814 11:23:21.561] Labels:       app=guestbook
I0814 11:23:21.561]               tier=frontend
I0814 11:23:21.562] Annotations:  <none>
I0814 11:23:21.562] Replicas:     3 current / 3 desired
I0814 11:23:21.562] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 11:23:21.562] Pod Template:
I0814 11:23:21.563]   Labels:  app=guestbook
I0814 11:23:21.563]            tier=frontend
I0814 11:23:21.563]   Containers:
I0814 11:23:21.563]    php-redis:
I0814 11:23:21.564]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0814 11:23:21.674] Namespace:    namespace-1565781800-25193
I0814 11:23:21.674] Selector:     app=guestbook,tier=frontend
I0814 11:23:21.674] Labels:       app=guestbook
I0814 11:23:21.674]               tier=frontend
I0814 11:23:21.674] Annotations:  <none>
I0814 11:23:21.675] Replicas:     3 current / 3 desired
I0814 11:23:21.675] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 11:23:21.675] Pod Template:
I0814 11:23:21.675]   Labels:  app=guestbook
I0814 11:23:21.675]            tier=frontend
I0814 11:23:21.675]   Containers:
I0814 11:23:21.675]    php-redis:
I0814 11:23:21.675]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0814 11:23:21.676]   Type    Reason            Age   From                    Message
I0814 11:23:21.676]   ----    ------            ----  ----                    -------
I0814 11:23:21.676]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-ntjlw
I0814 11:23:21.677]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-rncv6
I0814 11:23:21.677]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-wk6m5
I0814 11:23:21.677] 
W0814 11:23:21.777] E0814 11:23:17.290344   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.778] E0814 11:23:17.391667   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.778] E0814 11:23:17.496025   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.779] E0814 11:23:17.596150   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.779] E0814 11:23:18.291807   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.779] E0814 11:23:18.392896   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.780] E0814 11:23:18.497590   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.780] E0814 11:23:18.597715   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.780] E0814 11:23:19.293442   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.780] E0814 11:23:19.394446   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.784] E0814 11:23:19.448130   53204 daemon_controller.go:302] namespace-1565781797-2608/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1565781797-2608", SelfLink:"/apis/apps/v1/namespaces/namespace-1565781797-2608/daemonsets/bind", UID:"408f8421-2822-4169-b5a0-d088cedf57c4", ResourceVersion:"1499", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701378598, loc:(*time.Location)(0x7213220)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1565781797-2608\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0005ecbe0), Fields:(*v1.Fields)(0xc0005ecc00)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0005ecc20), Fields:(*v1.Fields)(0xc0005ecc40)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0005ecc60), Fields:(*v1.Fields)(0xc0005ecc80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0005ecca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002039038), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024417a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0005eccc0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0003bcdc0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00203908c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0814 11:23:21.785] E0814 11:23:19.499311   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.785] E0814 11:23:19.599259   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.785] E0814 11:23:20.294875   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.785] E0814 11:23:20.395718   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.785] E0814 11:23:20.500962   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.786] E0814 11:23:20.601197   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.786] I0814 11:23:20.883482   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781800-25193", Name:"frontend", UID:"fb624068-dc26-4c8d-bbb0-9e0d241e48ba", APIVersion:"v1", ResourceVersion:"1512", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mbvqk
W0814 11:23:21.786] I0814 11:23:20.887578   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781800-25193", Name:"frontend", UID:"fb624068-dc26-4c8d-bbb0-9e0d241e48ba", APIVersion:"v1", ResourceVersion:"1512", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4vrhn
W0814 11:23:21.787] I0814 11:23:20.889399   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781800-25193", Name:"frontend", UID:"fb624068-dc26-4c8d-bbb0-9e0d241e48ba", APIVersion:"v1", ResourceVersion:"1512", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f87r2
W0814 11:23:21.787] E0814 11:23:21.296381   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.787] I0814 11:23:21.315795   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781800-25193", Name:"frontend", UID:"2061fa15-90f6-4509-bade-1372f07dfa20", APIVersion:"v1", ResourceVersion:"1529", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ntjlw
W0814 11:23:21.788] I0814 11:23:21.320254   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781800-25193", Name:"frontend", UID:"2061fa15-90f6-4509-bade-1372f07dfa20", APIVersion:"v1", ResourceVersion:"1529", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rncv6
W0814 11:23:21.788] I0814 11:23:21.320451   53204 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781800-25193", Name:"frontend", UID:"2061fa15-90f6-4509-bade-1372f07dfa20", APIVersion:"v1", ResourceVersion:"1529", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wk6m5
W0814 11:23:21.788] E0814 11:23:21.397217   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.788] E0814 11:23:21.502298   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:23:21.789] E0814 11:23:21.602793   53204 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:23:21.889] core.sh:1065: Successful describe
I0814 11:23:21.889] Name:         frontend
I0814 11:23:21.890] Namespace:    namespace-1565781800-25193
I0814 11:23:21.890] Selector:     app=guestbook,tier=frontend
I0814 11:23:21.890] Labels: