PR: draveness: WIP: refactor interpod affinity with the scheduling framework
Result: FAILURE
Tests: 1 failed / 2470 succeeded
Started: 2019-08-14 11:04
Elapsed: 27m37s
Revision
Builder: gke-prow-ssd-pool-1a225945-6r1w
Refs: master:34791349, 80898:172f9844
pod: 385973db-be83-11e9-8f48-b2e5472b16c0
infra-commit: 381773791
repo: k8s.io/kubernetes
repo-commit: d4b62c5e51fed217d07e4f2405c0c05915dab255
repos: {u'k8s.io/kubernetes': u'master:34791349d656a9f8e45b7093012e29ad08782ffa,80898:172f9844fa958e9469da6f20a3717d65322ac3df'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
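The same test can typically also be reproduced through the repository's integration-test make target (a sketch, assuming a local etcd has been installed, e.g. via hack/install-etcd.sh, and is on PATH):

export PATH="$(pwd)/third_party/etcd:${PATH}"
make test-integration WHAT=./test/integration/scheduler KUBE_TEST_ARGS="-run TestPreemptWithPermitPlugin$"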
=== RUN   TestPreemptWithPermitPlugin
I0814 11:27:41.199634  110612 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 11:27:41.199670  110612 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 11:27:41.199803  110612 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 11:27:41.199827  110612 master.go:234] Using reconciler: 
I0814 11:27:41.201964  110612 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.202125  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.202142  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.202183  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.202233  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.202809  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.202873  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.203065  110612 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 11:27:41.203097  110612 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.203179  110612 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 11:27:41.203312  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.203324  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.203355  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.203396  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.203675  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.203863  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.203990  110612 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 11:27:41.204019  110612 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.204094  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.204102  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.204125  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.204170  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.204293  110612 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 11:27:41.204524  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.204620  110612 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 11:27:41.204649  110612 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.204702  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.204709  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.204730  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.204766  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.204786  110612 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 11:27:41.205030  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.205056  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.205278  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.205342  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.205364  110612 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 11:27:41.205420  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.205436  110612 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 11:27:41.205514  110612 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.205585  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.205596  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.205623  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.205661  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.205731  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.205927  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.205977  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.206022  110612 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 11:27:41.206168  110612 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.206195  110612 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 11:27:41.206238  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.206248  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.206277  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.206381  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.207012  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.207092  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.207096  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.207147  110612 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 11:27:41.207277  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.207302  110612 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 11:27:41.207297  110612 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.207361  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.207370  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.207398  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.207439  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.207687  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.207779  110612 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 11:27:41.207919  110612 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.207973  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.207980  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.208005  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.208035  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.208057  110612 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 11:27:41.208192  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.208409  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.208472  110612 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 11:27:41.208625  110612 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.208682  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.208690  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.208713  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.208753  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.208774  110612 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 11:27:41.209029  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.209076  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.209236  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.209512  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.209595  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.209600  110612 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 11:27:41.209636  110612 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 11:27:41.209730  110612 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.209780  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.209786  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.209807  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.209871  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.211721  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.211787  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.211881  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.211922  110612 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 11:27:41.211956  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.211993  110612 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 11:27:41.212134  110612 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.212213  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.212224  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.212255  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.212306  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.212771  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.212930  110612 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 11:27:41.212977  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.213055  110612 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 11:27:41.213089  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.213154  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.213164  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.213191  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.213310  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.213504  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.213602  110612 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 11:27:41.213717  110612 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.213773  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.213781  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.213802  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.213833  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.213878  110612 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 11:27:41.214109  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.214305  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.214495  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.214593  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.214602  110612 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 11:27:41.214734  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.214745  110612 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 11:27:41.214731  110612 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.214828  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.214837  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.214892  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.214931  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.215201  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.215351  110612 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 11:27:41.215384  110612 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.215435  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.215467  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.215467  110612 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 11:27:41.215479  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.215510  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.215568  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.215794  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.215959  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.215976  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.216004  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.216045  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.216055  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.216358  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.216516  110612 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.216591  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.216603  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.216635  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.216690  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.216737  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.217295  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.217322  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.217422  110612 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 11:27:41.217507  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.217558  110612 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 11:27:41.218020  110612 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.218370  110612 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.218964  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.218970  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.218964  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.219607  110612 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.220071  110612 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.220522  110612 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.221620  110612 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.222054  110612 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.222194  110612 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.222366  110612 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.222861  110612 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.223346  110612 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.223511  110612 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.224091  110612 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.224366  110612 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.225069  110612 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.225297  110612 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.225896  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.226099  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.226237  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.226358  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.226500  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.226641  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.226801  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.227524  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.227784  110612 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.228449  110612 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.229077  110612 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.229435  110612 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.229707  110612 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.231317  110612 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.231585  110612 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.232197  110612 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.232986  110612 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.233464  110612 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.234137  110612 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.234392  110612 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.234592  110612 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 11:27:41.234619  110612 master.go:434] Enabling API group "authentication.k8s.io".
I0814 11:27:41.234635  110612 master.go:434] Enabling API group "authorization.k8s.io".
I0814 11:27:41.234795  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.234960  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.234980  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.235055  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.235185  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.235736  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.235794  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.235963  110612 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 11:27:41.236124  110612 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 11:27:41.236145  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.236234  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.236252  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.236289  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.236384  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.236761  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.236921  110612 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 11:27:41.237075  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.237132  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.237140  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.237162  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.237210  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.237242  110612 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 11:27:41.237436  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.237719  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.237816  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.237938  110612 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 11:27:41.237965  110612 master.go:434] Enabling API group "autoscaling".
I0814 11:27:41.237974  110612 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 11:27:41.238109  110612 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.238208  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.238218  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.238259  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.238318  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.238641  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.238700  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.238755  110612 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 11:27:41.238909  110612 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.238946  110612 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 11:27:41.238987  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.238996  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.239021  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.239124  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.239439  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.239562  110612 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 11:27:41.239582  110612 master.go:434] Enabling API group "batch".
I0814 11:27:41.239633  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.239734  110612 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.239754  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.239798  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.239808  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.239837  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.239942  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.239973  110612 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 11:27:41.240047  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.240301  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.240395  110612 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 11:27:41.240418  110612 master.go:434] Enabling API group "certificates.k8s.io".
I0814 11:27:41.240482  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.240513  110612 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 11:27:41.240557  110612 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.240622  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.240637  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.240673  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.240729  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.240959  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.241042  110612 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 11:27:41.241079  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.241157  110612 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 11:27:41.241178  110612 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.241518  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.241598  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.241994  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.242023  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.242055  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.242094  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.242143  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.242328  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.242369  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.242416  110612 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 11:27:41.242454  110612 master.go:434] Enabling API group "coordination.k8s.io".
I0814 11:27:41.242583  110612 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 11:27:41.242590  110612 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.242658  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.242669  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.242697  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.242746  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.243039  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.243293  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.243398  110612 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 11:27:41.243420  110612 master.go:434] Enabling API group "extensions".
I0814 11:27:41.243473  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.243556  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.243563  110612 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.243591  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.243615  110612 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 11:27:41.243630  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.243639  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.243669  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.243771  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.244032  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.244059  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.244156  110612 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 11:27:41.244208  110612 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 11:27:41.244300  110612 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.244357  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.244367  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.244396  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.244476  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.244707  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.244797  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.244809  110612 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 11:27:41.244826  110612 master.go:434] Enabling API group "networking.k8s.io".
I0814 11:27:41.244914  110612 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.244963  110612 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 11:27:41.244990  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.245001  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.245034  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.245090  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.245348  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.245414  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.245490  110612 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 11:27:41.245505  110612 master.go:434] Enabling API group "node.k8s.io".
I0814 11:27:41.245520  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.245571  110612 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 11:27:41.245629  110612 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.245693  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.245702  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.245730  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.245773  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.246035  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.246058  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.246164  110612 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 11:27:41.246298  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.246306  110612 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.246370  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.246396  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.246429  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.246372  110612 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 11:27:41.246465  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.246691  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.246763  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.246825  110612 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 11:27:41.246886  110612 master.go:434] Enabling API group "policy".
I0814 11:27:41.246919  110612 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.246978  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.246986  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.247013  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.247033  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.247056  110612 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 11:27:41.247161  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.247578  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.247652  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.247673  110612 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 11:27:41.247730  110612 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 11:27:41.247813  110612 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.247899  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.247908  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.247945  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.248038  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.248372  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.248541  110612 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 11:27:41.248576  110612 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.248727  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.248746  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.248806  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.248897  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.249059  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.249259  110612 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 11:27:41.249374  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.249412  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.249469  110612 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 11:27:41.249494  110612 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 11:27:41.249611  110612 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.249639  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.249673  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.249684  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.249712  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.249764  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.250795  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.250961  110612 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 11:27:41.250997  110612 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.251123  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.251140  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.251177  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.251302  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.251343  110612 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 11:27:41.251572  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.251804  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.251837  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.252126  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.252126  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.252352  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.252549  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.252699  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.253263  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.254928  110612 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 11:27:41.254979  110612 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 11:27:41.255109  110612 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.255187  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.255197  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.255241  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.255287  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.255565  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.255618  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.255675  110612 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 11:27:41.255702  110612 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.255737  110612 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 11:27:41.255774  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.255785  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.255812  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.255925  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.256292  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.256359  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.256480  110612 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 11:27:41.256523  110612 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 11:27:41.256633  110612 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.256702  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.256713  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.257027  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.257126  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.257811  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.258327  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.258626  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.258663  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.258673  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.258781  110612 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 11:27:41.258813  110612 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 11:27:41.258948  110612 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 11:27:41.260610  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.261631  110612 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.261794  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.261823  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.261879  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.261926  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.262201  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.262322  110612 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 11:27:41.262502  110612 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.262586  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.262596  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.262767  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.262914  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.262945  110612 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 11:27:41.262975  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.263719  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.263759  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.263942  110612 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 11:27:41.263973  110612 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 11:27:41.264004  110612 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 11:27:41.264080  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.264104  110612 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 11:27:41.264274  110612 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.264414  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.264426  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.264469  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.264533  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.264808  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.264828  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.264877  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.265054  110612 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 11:27:41.265122  110612 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 11:27:41.265210  110612 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.265274  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.265285  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.265319  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.265367  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.265660  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.265685  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.265832  110612 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 11:27:41.265952  110612 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.266051  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.266062  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.266094  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.266132  110612 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 11:27:41.266259  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.266267  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.266648  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.266713  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.266869  110612 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 11:27:41.266898  110612 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.266954  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.266964  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.266993  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.267035  110612 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 11:27:41.267306  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.267374  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.267592  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.267683  110612 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 11:27:41.267755  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.267892  110612 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 11:27:41.267894  110612 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.268282  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.268292  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.268319  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.268360  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.269045  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.269048  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.269397  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.269480  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.269706  110612 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 11:27:41.269871  110612 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.269929  110612 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 11:27:41.269939  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.269949  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.269982  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.270023  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.270723  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.270792  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.270820  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.270902  110612 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 11:27:41.270921  110612 master.go:434] Enabling API group "storage.k8s.io".
I0814 11:27:41.271079  110612 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.271138  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.271147  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.271176  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.271240  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.271793  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.272070  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.272183  110612 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 11:27:41.272212  110612 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 11:27:41.272322  110612 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.272370  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.272378  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.272402  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.272456  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.272701  110612 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 11:27:41.272794  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.272944  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.272972  110612 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 11:27:41.273007  110612 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 11:27:41.273122  110612 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.273186  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.273198  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.273239  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.273284  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.273617  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.273653  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.273861  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.274488  110612 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 11:27:41.274559  110612 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 11:27:41.274652  110612 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.274720  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.274729  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.274771  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.274827  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.274834  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.275359  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.275396  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.275903  110612 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 11:27:41.275958  110612 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 11:27:41.276056  110612 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.276127  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.276137  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.276166  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.276214  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.276301  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.277272  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.277309  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.277361  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.277442  110612 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 11:27:41.277459  110612 master.go:434] Enabling API group "apps".
I0814 11:27:41.277499  110612 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 11:27:41.277493  110612 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.277557  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.277565  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.277667  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.277758  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.278364  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.279375  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.279474  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.279493  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.279726  110612 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 11:27:41.279763  110612 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.279832  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.280005  110612 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 11:27:41.280321  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.280432  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.280547  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.280761  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.281173  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.281270  110612 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 11:27:41.281314  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.281393  110612 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 11:27:41.281772  110612 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.281835  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.281863  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.281897  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.281950  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.282231  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.282263  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.282339  110612 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 11:27:41.282371  110612 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 11:27:41.282364  110612 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.282504  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.282514  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.282549  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.282605  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.282857  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.282974  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.282995  110612 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 11:27:41.283001  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.283011  110612 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 11:27:41.283044  110612 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 11:27:41.283042  110612 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.283268  110612 client.go:354] parsed scheme: ""
I0814 11:27:41.283281  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:41.283310  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:41.283355  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.283660  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:41.283699  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:41.283720  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.283803  110612 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 11:27:41.283820  110612 master.go:434] Enabling API group "events.k8s.io".
I0814 11:27:41.283837  110612 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 11:27:41.284067  110612 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.284305  110612 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.284489  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.284626  110612 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.284752  110612 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.284945  110612 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.285073  110612 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.285284  110612 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.285442  110612 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.285551  110612 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.285610  110612 watch_cache.go:405] Replace watchCache (rev: 29654) 
I0814 11:27:41.285666  110612 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.286592  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.286889  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.287826  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.288223  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.289277  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.289623  110612 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.290477  110612 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.290731  110612 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.291398  110612 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.291658  110612 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:27:41.291722  110612 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 11:27:41.292348  110612 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.292504  110612 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.292750  110612 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.293480  110612 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.294231  110612 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.295078  110612 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.295431  110612 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.296244  110612 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.296923  110612 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.297163  110612 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.297884  110612 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:27:41.297949  110612 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 11:27:41.298624  110612 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.298917  110612 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.299562  110612 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.300411  110612 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.300925  110612 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.301577  110612 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.302225  110612 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.303001  110612 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.303549  110612 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.304180  110612 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.304785  110612 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:27:41.304942  110612 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 11:27:41.305540  110612 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.306038  110612 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:27:41.306092  110612 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 11:27:41.306648  110612 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.307121  110612 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.307364  110612 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.307906  110612 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.308347  110612 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.308740  110612 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.309239  110612 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:27:41.309298  110612 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 11:27:41.310173  110612 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.310794  110612 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.311096  110612 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.311670  110612 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.311928  110612 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.312104  110612 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.312598  110612 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.312757  110612 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.313016  110612 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.313562  110612 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.313737  110612 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.313925  110612 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 11:27:41.313970  110612 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 11:27:41.313976  110612 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 11:27:41.314434  110612 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.315040  110612 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.315487  110612 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.315899  110612 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.316426  110612 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8ffbd20b-f366-4485-bb76-f0fbc733837f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 11:27:41.318560  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.318589  110612 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 11:27:41.318597  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.318606  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.318612  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.318617  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.318638  110612 httplog.go:90] GET /healthz: (177.463µs) 0 [Go-http-client/1.1 127.0.0.1:42150]
I0814 11:27:41.320358  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.564875ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.323322  110612 httplog.go:90] GET /api/v1/services: (1.205996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.327499  110612 httplog.go:90] GET /api/v1/services: (1.058914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.330017  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.330082  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.330097  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.330106  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.330115  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.330171  110612 httplog.go:90] GET /healthz: (336.932µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.332192  110612 httplog.go:90] GET /api/v1/services: (1.085319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.332600  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.243229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42150]
I0814 11:27:41.333541  110612 httplog.go:90] GET /api/v1/services: (1.039269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.334654  110612 httplog.go:90] POST /api/v1/namespaces: (1.637376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42150]
I0814 11:27:41.336526  110612 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.169707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.338696  110612 httplog.go:90] POST /api/v1/namespaces: (1.752048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.340104  110612 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.014508ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.342276  110612 httplog.go:90] POST /api/v1/namespaces: (1.778324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.419410  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.419452  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.419466  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.419477  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.419485  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.419520  110612 httplog.go:90] GET /healthz: (278.157µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:41.430968  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.431001  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.431015  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.431025  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.431033  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.431065  110612 httplog.go:90] GET /healthz: (256.564µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.519341  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.519380  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.519392  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.519403  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.519412  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.519442  110612 httplog.go:90] GET /healthz: (258.991µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:41.530953  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.530985  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.530999  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.531009  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.531019  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.531050  110612 httplog.go:90] GET /healthz: (264.229µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
E0814 11:27:41.563006  110612 factory.go:599] Error getting pod permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/test-pod for retry: Get http://127.0.0.1:36075/api/v1/namespaces/permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/pods/test-pod: dial tcp 127.0.0.1:36075: connect: connection refused; retrying...
I0814 11:27:41.619725  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.619772  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.619786  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.619796  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.619803  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.619836  110612 httplog.go:90] GET /healthz: (282.12µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:41.631666  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.631718  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.631732  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.631742  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.631750  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.631781  110612 httplog.go:90] GET /healthz: (275.511µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.719292  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.719331  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.719344  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.719355  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.719362  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.719392  110612 httplog.go:90] GET /healthz: (250.883µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:41.730944  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.730984  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.730998  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.731008  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.731015  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.731055  110612 httplog.go:90] GET /healthz: (290.99µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.819323  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.819360  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.819374  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.819384  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.819392  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.819425  110612 httplog.go:90] GET /healthz: (240.234µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:41.830948  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.830980  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.830993  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.831006  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.831015  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.831045  110612 httplog.go:90] GET /healthz: (276.902µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:41.919682  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.919743  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.919767  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.919778  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.919795  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.919854  110612 httplog.go:90] GET /healthz: (417.154µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:41.930921  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:41.930970  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:41.930984  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:41.930993  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:41.931002  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:41.931040  110612 httplog.go:90] GET /healthz: (280.981µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.019352  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:42.019383  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.019397  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:42.019408  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:42.019416  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:42.019445  110612 httplog.go:90] GET /healthz: (259.912µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:42.031476  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:42.031516  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.031530  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:42.031541  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:42.031549  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:42.031584  110612 httplog.go:90] GET /healthz: (271.779µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.119357  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:42.119397  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.119411  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:42.119421  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:42.119427  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:42.119461  110612 httplog.go:90] GET /healthz: (242.329µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:42.130945  110612 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 11:27:42.130981  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.130995  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:42.131005  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:42.131013  110612 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:42.131047  110612 httplog.go:90] GET /healthz: (266.916µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.199354  110612 client.go:354] parsed scheme: ""
I0814 11:27:42.199395  110612 client.go:354] scheme "" not registered, fallback to default scheme
I0814 11:27:42.199450  110612 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 11:27:42.199495  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:42.199966  110612 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 11:27:42.200045  110612 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 11:27:42.220495  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.220532  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:42.220546  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:42.220555  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:42.220600  110612 httplog.go:90] GET /healthz: (1.376106ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:42.231836  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.231924  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:42.231936  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:42.231946  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:42.231993  110612 httplog.go:90] GET /healthz: (1.213713ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.320204  110612 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.73763ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0814 11:27:42.320268  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.322150  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.322178  110612 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 11:27:42.322189  110612 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 11:27:42.322197  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 11:27:42.322237  110612 httplog.go:90] GET /healthz: (1.744077ms) 0 [Go-http-client/1.1 127.0.0.1:42154]
I0814 11:27:42.322642  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.985664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42300]
I0814 11:27:42.322771  110612 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.845867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.323002  110612 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 11:27:42.324199  110612 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.225255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0814 11:27:42.325104  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.140221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42304]
I0814 11:27:42.326622  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.156755ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42304]
I0814 11:27:42.326663  110612 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.588904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0814 11:27:42.326828  110612 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (3.670475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.328291  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (973.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0814 11:27:42.329336  110612 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.828054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.330290  110612 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 11:27:42.330311  110612 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0814 11:27:42.330337  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.64839ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0814 11:27:42.331474  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.331496  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.331540  110612 httplog.go:90] GET /healthz: (859.972µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.331810  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (918.223µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0814 11:27:42.333286  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (850.017µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.334512  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (825.768µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.335616  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (786.666µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.336558  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (665.749µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.338547  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.483013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.338753  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 11:27:42.340053  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (965.856µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.342905  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.314185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.343160  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 11:27:42.344240  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (771.287µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.346259  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.315177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.346751  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 11:27:42.348026  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (881.928µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.350345  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.964222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.350686  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 11:27:42.352339  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (974.728µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.354282  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.408304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.354611  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 11:27:42.355822  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (875.68µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.357927  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.358251  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 11:27:42.360278  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.691071ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.362147  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.500189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.362347  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 11:27:42.363382  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (812.646µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.365167  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.357994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.365477  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 11:27:42.366694  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (858.863µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.369279  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.801707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.369506  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 11:27:42.371000  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.289289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.373300  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.935187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.373615  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 11:27:42.374677  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (723.189µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.376390  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.376517  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 11:27:42.377434  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (782.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.379208  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.277593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.379512  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 11:27:42.380936  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.143643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.383092  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.73014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.383349  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 11:27:42.384809  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.248012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.386815  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.484897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.387155  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 11:27:42.388643  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.032654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.391620  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.286525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.391989  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 11:27:42.393944  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.425978ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.395809  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.466418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.396094  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 11:27:42.397484  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.212246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.399557  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.399804  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 11:27:42.400885  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (869.935µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.402931  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.547816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.403148  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 11:27:42.404377  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (920.683µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.406567  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.575185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.406979  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 11:27:42.408625  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.35166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.411505  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.740257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.411680  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 11:27:42.412670  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (840.073µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.414852  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.68047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.415046  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 11:27:42.416175  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (941.637µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.418375  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.688651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.418581  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 11:27:42.419642  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (870.994µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.420212  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.420432  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.421254  110612 httplog.go:90] GET /healthz: (1.751098ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:42.422162  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.01787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.422392  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 11:27:42.423658  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.031934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.425457  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.316175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.425657  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 11:27:42.426915  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (965.991µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.428717  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.429334  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 11:27:42.432459  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.433074  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.432704  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (3.130421ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.433488  110612 httplog.go:90] GET /healthz: (2.643556ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.436509  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.221358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.436973  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 11:27:42.439259  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.090035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.442340  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.361528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.442996  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 11:27:42.445445  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.793911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.448095  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.769327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.448315  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 11:27:42.449815  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.260512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.452485  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.728224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.452786  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 11:27:42.454319  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.116348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.456555  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.643401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.456904  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 11:27:42.458081  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (975.424µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.460345  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.8388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.460759  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 11:27:42.462001  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (942.991µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.464868  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.326099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.465099  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 11:27:42.466186  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (872.873µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.470222  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.539793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.470635  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 11:27:42.472026  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.191816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.474304  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.717541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.474527  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 11:27:42.476055  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.28663ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.478263  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.478617  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 11:27:42.479760  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (959.422µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.481933  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.679896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.482189  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 11:27:42.483358  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (886.911µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.485424  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.640548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.485738  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 11:27:42.486941  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (968.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.489835  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.337506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.490347  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 11:27:42.491606  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.070033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.493634  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.638378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.493794  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 11:27:42.495042  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (934.185µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.497583  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.077676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.497835  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 11:27:42.499206  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.102915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.501311  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.569557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.501939  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 11:27:42.503152  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.0235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.505111  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.570103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.505424  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 11:27:42.508588  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.437057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.511231  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.045253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.511666  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 11:27:42.513117  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.088287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.515431  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.715745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.515825  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 11:27:42.517323  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.111589ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.519574  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.619113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.519876  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 11:27:42.521577  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.521609  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.521647  110612 httplog.go:90] GET /healthz: (2.450104ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:42.521996  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.566192ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.524080  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.708344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.524425  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 11:27:42.525738  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.104686ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.527950  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.699072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.528143  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 11:27:42.529502  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.057981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.531351  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.432941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.531545  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 11:27:42.531960  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.531980  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.532027  110612 httplog.go:90] GET /healthz: (965.824µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.533196  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.492308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.535563  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.979496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.535952  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 11:27:42.537140  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (956.096µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.539397  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.826735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.539591  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 11:27:42.540746  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (937µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.543282  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.144276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.543625  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 11:27:42.560317  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.731017ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.581273  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.740378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.581533  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 11:27:42.600420  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.770533ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.620386  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.620428  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.620601  110612 httplog.go:90] GET /healthz: (1.404232ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:42.621981  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.549307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.622624  110612 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 11:27:42.632278  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.632306  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.632346  110612 httplog.go:90] GET /healthz: (1.501853ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.639949  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.459291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.664361  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.872882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.664620  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 11:27:42.680089  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.590573ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.701394  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.914048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.701703  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 11:27:42.720149  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.720189  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.720230  110612 httplog.go:90] GET /healthz: (1.136265ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:42.720647  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.141485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.732499  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.732604  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.732763  110612 httplog.go:90] GET /healthz: (1.948671ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.741560  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.019312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.742310  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 11:27:42.760463  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.461934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.781218  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.667426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.781585  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 11:27:42.800005  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.538199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.820258  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.820300  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.820378  110612 httplog.go:90] GET /healthz: (1.261742ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:42.821049  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.565244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.821313  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 11:27:42.831658  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.831683  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.831733  110612 httplog.go:90] GET /healthz: (909.976µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.839765  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.350186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.860744  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.861237  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 11:27:42.880262  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.55082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.903038  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.712946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.903319  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 11:27:42.920090  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.594609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:42.920136  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.920162  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.920202  110612 httplog.go:90] GET /healthz: (1.105505ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:42.932955  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:42.933144  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:42.933454  110612 httplog.go:90] GET /healthz: (2.6106ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.941310  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.855219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.941936  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 11:27:42.960173  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.67894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.981573  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.010629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:42.981927  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 11:27:42.999571  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.10132ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.021265  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.021295  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.021342  110612 httplog.go:90] GET /healthz: (2.16048ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:43.021643  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.992982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.023385  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 11:27:43.038620  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.038787  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.038835  110612 httplog.go:90] GET /healthz: (7.72982ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.040439  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.92177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.060633  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.061415  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 11:27:43.080396  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.917775ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.100930  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.346156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.101527  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 11:27:43.120097  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.120135  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.120213  110612 httplog.go:90] GET /healthz: (1.144537ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:43.121159  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.666453ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.132245  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.132278  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.132333  110612 httplog.go:90] GET /healthz: (1.558587ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.140670  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.143927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.141006  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 11:27:43.159932  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.187163ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.188931  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (10.467237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.190442  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 11:27:43.200067  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.563543ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.221310  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.797454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.221728  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 11:27:43.223285  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.223316  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.223355  110612 httplog.go:90] GET /healthz: (2.723794ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:43.243169  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.243203  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.243254  110612 httplog.go:90] GET /healthz: (5.201714ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.244479  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (2.598091ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.260431  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.941293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.260668  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 11:27:43.279881  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.38282ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.300968  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.465767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.301237  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 11:27:43.320011  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.320046  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.320087  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.626408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.320086  110612 httplog.go:90] GET /healthz: (954.218µs) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:43.332290  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.332324  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.332373  110612 httplog.go:90] GET /healthz: (1.182442ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.340627  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.136203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.340989  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 11:27:43.359815  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.339045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.380579  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.035282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.380884  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 11:27:43.399735  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.257305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.420749  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.236914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.420983  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 11:27:43.421012  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.421035  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.421078  110612 httplog.go:90] GET /healthz: (1.079602ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:43.431675  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.431708  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.431750  110612 httplog.go:90] GET /healthz: (986.087µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.440038  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.58985ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.460696  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.21592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.461044  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 11:27:43.480632  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.329187ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.500976  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.448285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.501280  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 11:27:43.519985  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.483285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.520156  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.520175  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.520204  110612 httplog.go:90] GET /healthz: (1.131868ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:43.533440  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.533475  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.533518  110612 httplog.go:90] GET /healthz: (1.174297ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.540538  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.078176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.540925  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 11:27:43.560099  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.551675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.580785  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.317686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.581163  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 11:27:43.600158  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.542558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.620196  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.620235  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.620272  110612 httplog.go:90] GET /healthz: (916.634µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:43.620456  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.987237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.620773  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 11:27:43.633145  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.633175  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.633314  110612 httplog.go:90] GET /healthz: (1.156936ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.639991  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.384849ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.660912  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.394057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.661162  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 11:27:43.680020  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.526795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.703391  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.835972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.703815  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 11:27:43.719952  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.477173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.720804  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.720831  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.720909  110612 httplog.go:90] GET /healthz: (1.42725ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:43.731726  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.731758  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.731810  110612 httplog.go:90] GET /healthz: (965.533µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.740721  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.191898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.741008  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 11:27:43.760112  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.559168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.784265  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.691892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.784533  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 11:27:43.800004  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.479937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.820671  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.158893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:43.821341  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.821377  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.821412  110612 httplog.go:90] GET /healthz: (1.877704ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:43.821456  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 11:27:43.832061  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.832090  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.832130  110612 httplog.go:90] GET /healthz: (850.935µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.839930  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.521911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.860756  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.260365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.861322  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 11:27:43.880727  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (2.134629ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.900684  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.221844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.901181  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 11:27:43.920295  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.920350  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.920387  110612 httplog.go:90] GET /healthz: (1.107799ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:43.920826  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.751712ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.932565  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:43.932599  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:43.932644  110612 httplog.go:90] GET /healthz: (1.202705ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.940233  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.774133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.940665  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 11:27:43.960083  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.580247ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.980574  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.065059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:43.980990  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 11:27:44.000071  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.543812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.020192  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.020425  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.020654  110612 httplog.go:90] GET /healthz: (1.496304ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:44.021429  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.901549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.021689  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 11:27:44.032739  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.032777  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.032825  110612 httplog.go:90] GET /healthz: (2.032304ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.040351  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.810838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.060596  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.978392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.062165  110612 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 11:27:44.080135  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.652434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.082029  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.27347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.101109  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.537136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.101687  110612 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 11:27:44.120305  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.120343  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.120388  110612 httplog.go:90] GET /healthz: (1.288759ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:44.120515  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.964315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.122685  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.751376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.131609  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.131648  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.131701  110612 httplog.go:90] GET /healthz: (915.284µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.140460  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.982905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.140674  110612 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 11:27:44.160420  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.449468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.162196  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.200161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.181190  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.643785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.181441  110612 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 11:27:44.200028  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.552892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.201905  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.367407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.220772  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.220811  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.220874  110612 httplog.go:90] GET /healthz: (1.743822ms) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:44.221207  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.70139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.221513  110612 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 11:27:44.231666  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.231698  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.231750  110612 httplog.go:90] GET /healthz: (972.145µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.241193  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (2.548552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.243414  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.758613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.260633  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.08679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.260930  110612 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 11:27:44.282140  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.56298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.284496  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.908509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.301276  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.764017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.301552  110612 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 11:27:44.319774  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.241817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.321404  110612 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.166464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.321525  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.321559  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.321595  110612 httplog.go:90] GET /healthz: (748.942µs) 0 [Go-http-client/1.1 127.0.0.1:42152]
I0814 11:27:44.331818  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.331868  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.331919  110612 httplog.go:90] GET /healthz: (1.167916ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.342419  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.851996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.342683  110612 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 11:27:44.360118  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.599584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.362209  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.471849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.380818  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.286325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.381072  110612 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 11:27:44.404320  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (5.869266ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.407745  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.896707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.420422  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.420798  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.421069  110612 httplog.go:90] GET /healthz: (1.901113ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:44.420480  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.065443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.421604  110612 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 11:27:44.432039  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.432068  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.432107  110612 httplog.go:90] GET /healthz: (1.048205ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.439617  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.216862ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.441356  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.187556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.461618  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.777735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.461952  110612 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 11:27:44.479550  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.092098ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.481486  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.430838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.500549  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.008256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.501011  110612 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 11:27:44.520214  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.692513ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.521808  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.521855  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.521898  110612 httplog.go:90] GET /healthz: (2.307875ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:44.522944  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.300507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.532081  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.532119  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.532183  110612 httplog.go:90] GET /healthz: (1.162696ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.540922  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.4125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.541230  110612 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 11:27:44.559827  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.299176ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.561342  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.044815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.584527  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.041697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.585023  110612 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 11:27:44.599732  110612 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.222079ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.601595  110612 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.305291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.620414  110612 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 11:27:44.620444  110612 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 11:27:44.620475  110612 httplog.go:90] GET /healthz: (1.348557ms) 0 [Go-http-client/1.1 127.0.0.1:42302]
I0814 11:27:44.620660  110612 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.173324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.620970  110612 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 11:27:44.632208  110612 httplog.go:90] GET /healthz: (948.362µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.634129  110612 httplog.go:90] GET /api/v1/namespaces/default: (1.365206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.636542  110612 httplog.go:90] POST /api/v1/namespaces: (1.897417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.638469  110612 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.504128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.642965  110612 httplog.go:90] POST /api/v1/namespaces/default/services: (4.060593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.645089  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.742461ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.648502  110612 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.959985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.724339  110612 httplog.go:90] GET /healthz: (5.049474ms) 200 [Go-http-client/1.1 127.0.0.1:42302]
W0814 11:27:44.725823  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.725869  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.725891  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.725902  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.725920  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.725930  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.725942  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.725961  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.725971  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.726030  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 11:27:44.726042  110612 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 11:27:44.726063  110612 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 11:27:44.726076  110612 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
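
[Editor's note, not part of the captured log] The two lines above show the test scheduler still being assembled from the legacy 'DefaultProvider', whose predicate set includes MatchInterPodAffinity and whose priority set includes InterPodAffinityPriority; those are the pieces the PR under test ("refactor interpod affinity with the scheduling framework") is moving onto framework plugins. The sketch below only illustrates the general shape of a filter-style affinity check; every type and function name in it (FilterPlugin, Status, NodeInfo, interPodAffinity) is a hypothetical stand-in, not the real k8s.io/kubernetes scheduler framework API.

// Hypothetical, self-contained sketch of an inter-pod-affinity style filter.
// All names here are illustrative stand-ins, not kube-scheduler types.
package main

import "fmt"

type Status struct {
	ok     bool
	reason string
}

type Pod struct {
	Name   string
	Labels map[string]string
}

type NodeInfo struct {
	Name string
	Pods []Pod
}

// FilterPlugin is a stand-in for a framework filter extension point.
type FilterPlugin interface {
	Name() string
	Filter(pod Pod, node NodeInfo) Status
}

// interPodAffinity admits a node only if some pod already on it carries the required label.
type interPodAffinity struct {
	requiredLabel string
	requiredValue string
}

func (p interPodAffinity) Name() string { return "InterPodAffinity" }

func (p interPodAffinity) Filter(pod Pod, node NodeInfo) Status {
	for _, existing := range node.Pods {
		if existing.Labels[p.requiredLabel] == p.requiredValue {
			return Status{ok: true}
		}
	}
	return Status{ok: false, reason: "no pod matching affinity term on node"}
}

func main() {
	plugin := interPodAffinity{requiredLabel: "app", requiredValue: "db"}
	node := NodeInfo{
		Name: "test-node-0",
		Pods: []Pod{{Name: "db-0", Labels: map[string]string{"app": "db"}}},
	}
	fmt.Println(plugin.Filter(Pod{Name: "web-0"}, node)) // prints {true }
}
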
I0814 11:27:44.726682  110612 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.726713  110612 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.726760  110612 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.726778  110612 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.727111  110612 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.727127  110612 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.727141  110612 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.727158  110612 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.727556  110612 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.727572  110612 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.727941  110612 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.727956  110612 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.728297  110612 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.728310  110612 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.728361  110612 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.728378  110612 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.728430  110612 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.728442  110612 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.728766  110612 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.728783  110612 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.730140  110612 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (577.809µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42548]
I0814 11:27:44.730270  110612 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (461.386µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
I0814 11:27:44.730271  110612 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (829.204µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:27:44.730486  110612 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (468.587µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42540]
I0814 11:27:44.730687  110612 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (419.64µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42542]
I0814 11:27:44.730984  110612 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (391.632µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42534]
I0814 11:27:44.730978  110612 get.go:250] Starting watch for /api/v1/pods, rv=29654 labels= fields= timeout=5m32s
I0814 11:27:44.731200  110612 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (421.373µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42544]
I0814 11:27:44.731636  110612 get.go:250] Starting watch for /api/v1/nodes, rv=29654 labels= fields= timeout=6m13s
I0814 11:27:44.731651  110612 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (462.429µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42532]
I0814 11:27:44.731703  110612 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (398.16µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42546]
I0814 11:27:44.731973  110612 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=29654 labels= fields= timeout=6m24s
I0814 11:27:44.732059  110612 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=29654 labels= fields= timeout=6m48s
I0814 11:27:44.732466  110612 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=29654 labels= fields= timeout=5m18s
I0814 11:27:44.732471  110612 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=29654 labels= fields= timeout=8m51s
I0814 11:27:44.732568  110612 get.go:250] Starting watch for /api/v1/services, rv=29959 labels= fields= timeout=7m18s
I0814 11:27:44.732665  110612 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=29654 labels= fields= timeout=9m9s
I0814 11:27:44.732998  110612 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=29654 labels= fields= timeout=8m14s
I0814 11:27:44.733298  110612 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.733313  110612 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 11:27:44.734172  110612 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (397.216µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42550]
I0814 11:27:44.734862  110612 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=29654 labels= fields= timeout=7m49s
I0814 11:27:44.735053  110612 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (470.528µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42538]
I0814 11:27:44.735643  110612 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=29654 labels= fields= timeout=5m3s
I0814 11:27:44.826683  110612 shared_informer.go:211] caches populated
I0814 11:27:44.926982  110612 shared_informer.go:211] caches populated
I0814 11:27:45.027454  110612 shared_informer.go:211] caches populated
I0814 11:27:45.127682  110612 shared_informer.go:211] caches populated
I0814 11:27:45.227919  110612 shared_informer.go:211] caches populated
I0814 11:27:45.328134  110612 shared_informer.go:211] caches populated
I0814 11:27:45.428368  110612 shared_informer.go:211] caches populated
I0814 11:27:45.528597  110612 shared_informer.go:211] caches populated
I0814 11:27:45.629315  110612 shared_informer.go:211] caches populated
I0814 11:27:45.729527  110612 shared_informer.go:211] caches populated
I0814 11:27:45.730729  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:45.730768  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:45.731023  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:45.731300  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:45.731626  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:45.732141  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:45.735488  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:45.829778  110612 shared_informer.go:211] caches populated
I0814 11:27:45.929977  110612 shared_informer.go:211] caches populated
I0814 11:27:45.935146  110612 httplog.go:90] POST /api/v1/nodes: (4.496721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:27:45.935397  110612 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 11:27:45.940121  110612 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods: (2.283017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:27:45.940572  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/waiting-pod
I0814 11:27:45.940604  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/waiting-pod
I0814 11:27:45.940766  110612 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/waiting-pod", node "test-node-0"
I0814 11:27:45.940784  110612 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 11:27:45.940875  110612 framework.go:562] waiting for 30s for pod "waiting-pod" at permit
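
[Editor's note, not part of the captured log] The framework.go:562 line above is the permit stage holding "waiting-pod" for up to 30s, which is exactly what TestPreemptWithPermitPlugin exercises: a permit plugin returns a wait decision, and the pod stays parked until it is allowed, rejected, or the timeout expires. Below is a minimal sketch of that wait-with-timeout pattern in plain Go; it mirrors the idea only and does not use the real scheduler framework types (waitingPod and permit are hypothetical names).

// Hypothetical sketch of the permit "wait" pattern recorded above: a pod is
// held until it is approved, rejected, or a timeout elapses.
package main

import (
	"fmt"
	"time"
)

type waitingPod struct {
	name    string
	allowCh chan bool // true = allow, false = reject
}

// permit blocks until the pod is allowed/rejected or the timeout fires,
// corresponding to "waiting for 30s for pod ... at permit" in the log.
func permit(p *waitingPod, timeout time.Duration) string {
	select {
	case allowed := <-p.allowCh:
		if allowed {
			return "allowed: proceed to bind"
		}
		return "rejected: pod goes back to the scheduling queue"
	case <-time.After(timeout):
		return "timeout: treated as rejection"
	}
}

func main() {
	p := &waitingPod{name: "waiting-pod", allowCh: make(chan bool, 1)}

	// Another actor (e.g. a later pod's plugin) signals the waiting pod.
	go func() {
		time.Sleep(50 * time.Millisecond)
		p.allowCh <- true
	}()

	fmt.Println(permit(p, 30*time.Second))
}
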
I0814 11:27:45.944578  110612 factory.go:615] Attempting to bind signalling-pod to test-node-1
I0814 11:27:45.945000  110612 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 11:27:45.945818  110612 scheduler.go:447] Failed to bind pod: permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod
E0814 11:27:45.945898  110612 scheduler.go:449] scheduler cache ForgetPod failed: pod ee66cd77-93da-4dec-8489-33cd465d9eb0 wasn't assumed so cannot be forgotten
E0814 11:27:45.945922  110612 scheduler.go:605] error binding pod: Post http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod/binding: dial tcp 127.0.0.1:39619: connect: connection refused
E0814 11:27:45.945949  110612 factory.go:566] Error scheduling permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod: Post http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod/binding: dial tcp 127.0.0.1:39619: connect: connection refused; retrying
I0814 11:27:45.945988  110612 factory.go:624] Updating pod condition for permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 11:27:45.947055  110612 scheduler.go:280] Error updating the condition of the pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod: Put http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod/status: dial tcp 127.0.0.1:39619: connect: connection refused
E0814 11:27:45.947052  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
E0814 11:27:45.947079  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39619/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/events: dial tcp 127.0.0.1:39619: connect: connection refused' (may retry after sleeping)
I0814 11:27:45.948554  110612 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/waiting-pod/binding: (2.823555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:27:45.948798  110612 scheduler.go:614] pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0814 11:27:45.951659  110612 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/events: (2.447734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
E0814 11:27:46.147559  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
E0814 11:27:46.548205  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
I0814 11:27:46.730906  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:46.730923  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:46.731247  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:46.731464  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:46.731784  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:46.732349  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:46.735679  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:27:47.348762  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
I0814 11:27:47.731088  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:47.731146  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:47.731398  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:47.731611  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:47.731894  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:47.732504  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:47.735876  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:48.731276  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:48.731325  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:48.731679  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:48.731761  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:48.732051  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:48.732652  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:48.736053  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:27:48.949610  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
I0814 11:27:49.731473  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:49.731473  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:49.731901  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:49.732034  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:49.732216  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:49.732821  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:49.736280  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:50.731699  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:50.731701  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:50.732044  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:50.732165  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:50.732423  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:50.733026  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:50.736414  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:51.731922  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:51.731919  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:51.732242  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:51.732291  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:51.732516  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:51.733216  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:51.736598  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:27:51.932521  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36075/apis/events.k8s.io/v1beta1/namespaces/permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/events: dial tcp 127.0.0.1:36075: connect: connection refused' (may retry after sleeping)
E0814 11:27:52.150249  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
I0814 11:27:52.732127  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:52.732212  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:52.732592  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:52.732669  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:52.732741  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:52.733612  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:52.736938  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:53.732380  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:53.732488  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:53.732685  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:53.732812  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:53.732945  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:53.734719  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:53.737348  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:27:54.363627  110612 factory.go:599] Error getting pod permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/test-pod for retry: Get http://127.0.0.1:36075/api/v1/namespaces/permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/pods/test-pod: dial tcp 127.0.0.1:36075: connect: connection refused; retrying...
I0814 11:27:54.634632  110612 httplog.go:90] GET /api/v1/namespaces/default: (1.723729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:27:54.636327  110612 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.303805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:27:54.637762  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.021237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:27:54.733048  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:54.733089  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:54.733048  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:54.733069  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:54.733240  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:54.734933  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:54.737539  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:55.733240  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:55.733255  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:55.733357  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:55.733427  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:55.733479  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:55.735064  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:55.737697  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:56.733401  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:56.733476  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:56.733401  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:56.733622  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:56.733705  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:56.735332  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:56.737862  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:57.733593  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:57.733683  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:57.733708  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:57.733750  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:57.733864  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:57.735575  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:57.738058  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:27:58.431204  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39619/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/events: dial tcp 127.0.0.1:39619: connect: connection refused' (may retry after sleeping)
E0814 11:27:58.550836  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
I0814 11:27:58.733864  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:58.734039  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:58.734093  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:58.734144  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:58.734237  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:58.735828  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:58.738745  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:59.734092  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:59.734324  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:59.734386  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:59.734402  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:59.734442  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:59.736046  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:27:59.738923  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:00.734289  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:00.734615  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:00.734772  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:00.734775  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:00.734792  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:00.736205  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:00.739091  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:01.734961  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:01.735002  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:01.735027  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:01.735054  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:01.735502  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:01.736359  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:01.739263  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:02.735180  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:02.735233  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:02.735278  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:02.735918  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:02.736687  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:02.737225  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:02.739444  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:28:02.895129  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36075/apis/events.k8s.io/v1beta1/namespaces/permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/events: dial tcp 127.0.0.1:36075: connect: connection refused' (may retry after sleeping)
I0814 11:28:03.735351  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:03.735400  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:03.735404  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:03.736017  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:03.737419  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:03.737502  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:03.739597  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:04.635756  110612 httplog.go:90] GET /api/v1/namespaces/default: (2.751395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:28:04.638282  110612 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.096021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:28:04.643115  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (4.329568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:28:04.735559  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:04.735605  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:04.735619  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:04.736267  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:04.737961  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:04.738045  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:04.739756  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:05.735726  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:05.735778  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:05.735791  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:05.736458  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:05.738081  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:05.738217  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:05.739899  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:06.735975  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:06.736033  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:06.736047  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:06.736624  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:06.738238  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:06.738372  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:06.740364  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:07.736171  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:07.736171  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:07.736191  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:07.736795  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:07.738354  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:07.738505  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:07.741299  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:08.736797  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:08.736815  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:08.736794  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:08.737126  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:08.738538  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:08.738819  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:08.741427  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:28:09.530643  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39619/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/events: dial tcp 127.0.0.1:39619: connect: connection refused' (may retry after sleeping)
I0814 11:28:09.736992  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:09.737051  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:09.737072  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:09.737279  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:09.738689  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:09.739028  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:09.741548  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:10.737181  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:10.737272  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:10.737318  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:10.737448  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:10.738946  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:10.739182  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:10.741735  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:28:11.351659  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
I0814 11:28:11.737446  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:11.737446  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:11.737554  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:11.737643  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:11.739166  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:11.739336  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:11.742076  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:12.737698  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:12.737716  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:12.737756  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:12.738097  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:12.739309  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:12.739493  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:12.742249  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:13.737971  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:13.738002  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:13.737983  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:13.738292  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:13.739491  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:13.739603  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:13.742382  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:14.635056  110612 httplog.go:90] GET /api/v1/namespaces/default: (1.81966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:28:14.636899  110612 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.408674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:28:14.639463  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.181716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:28:14.738086  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:14.738423  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:14.739651  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:14.739737  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:14.740669  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:14.740732  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:14.742512  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:28:14.995746  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36075/apis/events.k8s.io/v1beta1/namespaces/permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/events: dial tcp 127.0.0.1:36075: connect: connection refused' (may retry after sleeping)
I0814 11:28:15.738298  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:15.738577  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:15.739811  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:15.739923  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:15.740951  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:15.741043  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:15.742692  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:15.943899  110612 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods: (2.585922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:28:15.944144  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:15.944178  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:15.944307  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:15.944352  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:15.946298  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.32777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:15.946967  110612 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/events: (1.969629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:15.947195  110612 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod/status: (2.209696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0814 11:28:15.948754  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.097127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:15.949059  110612 generic_scheduler.go:1191] Node test-node-0 is a potential node for preemption.
I0814 11:28:15.951567  110612 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod/status: (2.107129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:15.954407  110612 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/waiting-pod: (2.372673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:15.957062  110612 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/events: (1.870848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.046542  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.846668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.146558  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.895708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.246761  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.07626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.346663  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.935328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.447697  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.990575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.546394  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.700712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.646817  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.097521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.738517  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:16.738744  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:16.740016  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:16.740032  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:16.741085  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:16.741176  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:16.742876  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:16.746270  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.616318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.846901  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.219021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:16.946648  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.981568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.046626  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.997382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.146811  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.090636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.246596  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.972141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.347128  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.407638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.446897  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.175512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.546964  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.264993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.646825  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.023445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.731472  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:17.731523  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:17.731813  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:17.731882  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:17.734572  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.363239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.734572  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.285383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:17.735204  110612 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/events: (2.404655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47278]
I0814 11:28:17.738706  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:17.739099  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:17.740191  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:17.740444  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:17.741263  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:17.741295  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:17.743045  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:17.746683  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.998041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.846509  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.839714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:17.946915  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.222066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.046972  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.264838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.146520  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.855628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.253911  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (9.169636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.346682  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.998517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.446730  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.016193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.546682  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.918775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.646801  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.963197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.739041  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:18.739270  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:18.739280  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:18.739293  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:18.739499  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:18.739554  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:18.740358  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:18.740697  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:18.741638  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:18.741669  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:18.743067  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.040926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:18.743215  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:18.744080  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.030595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.744166  110612 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/events/preemptor-pod.15bac685a2f95259: (3.004528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47282]
I0814 11:28:18.746269  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.647163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.846696  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.989302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:18.946695  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.012894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.047056  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.388663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.146790  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.048265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.246419  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.796422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.346526  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.863823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.446511  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.879831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.546105  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.481676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.646587  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.887233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.739289  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:19.739447  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:19.739480  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:19.739494  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:19.739706  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:19.739771  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:19.740491  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:19.740867  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:19.741797  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.711166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:19.741810  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.701964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:19.742170  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:19.742196  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:19.743382  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:19.746103  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.557816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:19.846908  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.216126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:19.946095  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.482986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
E0814 11:28:19.964340  110612 factory.go:599] Error getting pod permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/test-pod for retry: Get http://127.0.0.1:36075/api/v1/namespaces/permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/pods/test-pod: dial tcp 127.0.0.1:36075: connect: connection refused; retrying...
I0814 11:28:20.046402  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.575474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.146520  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.815734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.247616  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.945794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.346870  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.213098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.447961  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.335399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.546635  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.94825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.646869  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.215593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.739566  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:20.739626  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:20.739744  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:20.739758  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:20.739917  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:20.739970  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:20.740785  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:20.741022  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:20.742342  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:20.742369  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:20.743555  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:20.745635  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (5.257708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:20.746288  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (5.990112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.753187  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.476011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47676]
E0814 11:28:20.763051  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39619/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/events: dial tcp 127.0.0.1:39619: connect: connection refused' (may retry after sleeping)
I0814 11:28:20.846586  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.929925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:20.948914  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (4.304121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.047233  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.610836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.148370  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.691135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.245956  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.364368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.346176  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.446651  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.011783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.546466  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.816269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.646321  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.646026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.739770  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:21.739771  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:21.739950  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:21.739965  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:21.740115  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:21.740165  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:21.740951  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:21.741233  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:21.742279  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.785589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:21.742282  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.841132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.742495  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:21.742524  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:21.743690  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:21.745888  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.322849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.846472  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.8205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:21.946507  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.850741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.046245  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.63467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.146253  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.614578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.246094  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.522655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.346630  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.018529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.446492  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.915109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.546593  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.884517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.646340  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.722736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.739968  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:22.739986  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:22.740166  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:22.740188  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:22.740379  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:22.740433  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:22.741344  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:22.741355  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:22.742530  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.76302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.742558  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.634081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:22.742623  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:22.742643  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:22.743938  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:22.746103  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.525099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.847226  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.482202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:22.946776  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.047669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.047033  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.294443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.147762  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.16188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.246691  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.003523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.346737  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.014906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.446615  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.004843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.546101  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.500284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.646203  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.552088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.740175  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:23.740179  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:23.740349  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:23.740376  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:23.740543  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:23.740596  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:23.741482  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:23.741491  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:23.742498  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.481339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:23.742551  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.631479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.742926  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:23.742958  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:23.744092  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:23.745687  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.185616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.846597  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.953894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:23.946544  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.893567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.046648  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.991286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.146296  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.633959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.246735  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.053755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.346894  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.230041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.446277  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.657499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.546816  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.034388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.634793  110612 httplog.go:90] GET /api/v1/namespaces/default: (1.482095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.636186  110612 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.070155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.637608  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.060017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.646241  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.603427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.740449  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:24.740690  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:24.740907  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:24.740923  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:24.741059  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:24.741109  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
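[editor's note] The "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory" lines above are the scheduler's resource-fit check rejecting the only node because the preemptor pod's requests exceed what the node has left. A minimal, simplified sketch of that comparison follows; it is not the scheduler's actual code, and the type and numbers are purely illustrative.

package main

import "fmt"

// resources is an illustrative stand-in for a pod's requests or a node's
// remaining allocatable capacity.
type resources struct {
	milliCPU int64
	memory   int64 // bytes
}

// fitReasons returns failure reasons in the same spirit as the scheduler's
// "Insufficient cpu" / "Insufficient memory" messages when a request does
// not fit into the free capacity.
func fitReasons(request, free resources) []string {
	var reasons []string
	if request.milliCPU > free.milliCPU {
		reasons = append(reasons, "Insufficient cpu")
	}
	if request.memory > free.memory {
		reasons = append(reasons, "Insufficient memory")
	}
	return reasons
}

func main() {
	// Hypothetical numbers: the node is already occupied by the victim pods,
	// so the preemptor's requests cannot be satisfied until they are evicted.
	preemptor := resources{milliCPU: 500, memory: 500 << 20}
	nodeFree := resources{milliCPU: 100, memory: 100 << 20}
	fmt.Println(fitReasons(preemptor, nodeFree)) // [Insufficient cpu Insufficient memory]
}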
I0814 11:28:24.742365  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:24.742550  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:24.743630  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.541546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.743950  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:24.743984  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:24.744206  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:24.745779  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.26937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:24.746594  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (5.072003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:24.846444  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.837388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:24.946390  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.775195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.046628  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.98427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.146313  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.705326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.246512  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.84837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.346530  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.830079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.446461  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.777145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
E0814 11:28:25.486134  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36075/apis/events.k8s.io/v1beta1/namespaces/permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/events: dial tcp 127.0.0.1:36075: connect: connection refused' (may retry after sleeping)
I0814 11:28:25.546540  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.910869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.646320  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.738533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.740635  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:25.740911  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:25.741070  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:25.741089  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:25.741282  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:25.741347  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:25.742534  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:25.742661  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:25.743694  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.922085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:25.743726  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.953564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.744094  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:25.744112  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:25.744319  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:25.745855  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.371011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.846418  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.741526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:25.946414  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.845767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.046412  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.813561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.146567  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.963104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.246556  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.970762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.346833  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.101281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.446568  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.933202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.547022  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.330355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.646641  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.023541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
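[editor's note] The steady, roughly 100 ms GETs of preemptor-pod above and below are the test harness polling the pod until it is scheduled or the test times out. A minimal sketch of such a poll with the client-go of this era is shown next; the clientset, namespace, pod name, and timeout are assumptions for illustration, not the test's actual helper.

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitForPodScheduled polls a pod every 100ms until its PodScheduled
// condition becomes True, mirroring the repeated GET requests in the log.
func waitForPodScheduled(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodScheduled && cond.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// A fake clientset seeded with an already-scheduled pod, only to show the
	// poll completing; the integration test talks to a real test apiserver.
	cs := fake.NewSimpleClientset(&v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Namespace: "demo", Name: "preemptor-pod"},
		Status: v1.PodStatus{Conditions: []v1.PodCondition{
			{Type: v1.PodScheduled, Status: v1.ConditionTrue},
		}},
	})
	fmt.Println(waitForPodScheduled(cs, "demo", "preemptor-pod", 5*time.Second))
}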
I0814 11:28:26.740805  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:26.741075  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:26.741189  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:26.741200  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:26.741326  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:26.741370  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:26.742684  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:26.742788  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:26.743262  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.473527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:26.743265  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.658365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.744425  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:26.744450  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:26.744480  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:26.746234  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.670518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.846080  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.486092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:26.946507  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.880015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.046950  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.166158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.146612  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.960746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.247121  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.402605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.347117  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.397139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.446933  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.231266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.546455  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.799861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.646479  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.700036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.740997  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:27.741261  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:27.741419  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:27.741438  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:27.741618  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:27.741684  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:27.742879  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:27.742892  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:27.744166  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.01428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:27.744547  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.453841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:27.745118  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:27.745150  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:27.745163  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:27.746501  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.342889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:27.846944  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.222726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:27.947277  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.532555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.046774  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.075151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.146739  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.026752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.246726  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.052881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.346505  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.905971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.446606  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.887343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.546775  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.059752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.647036  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.368488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.741190  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:28.741467  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:28.741624  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:28.741638  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:28.741796  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:28.741898  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:28.743001  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:28.743214  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:28.744157  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.663199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:28.744666  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.637288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:28.745245  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:28.745277  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:28.745291  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:28.746482  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.935472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:28.846755  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.013184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:28.946789  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.016271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.046570  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.96911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.146620  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.938615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.246578  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.860923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.346917  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.156744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.447032  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.299221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.547519  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.752635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.647291  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.253371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.741399  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:29.741584  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:29.741959  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:29.741986  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:29.742161  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:29.742217  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:29.743159  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:29.743340  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:29.745738  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.182567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:29.745743  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.682857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:29.746247  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.343095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51140]
I0814 11:28:29.746286  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:29.746294  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:29.746317  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:29.847096  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.397145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:29.946372  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.7317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.046333  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.645354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.146741  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.059541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.246233  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.568638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.346621  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.959639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.446579  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.927838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.546787  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.141183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.646621  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.862387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.741825  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:30.741825  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:30.742059  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:30.742075  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:30.742211  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:30.742289  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:30.743317  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:30.743544  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:30.744596  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.998145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.745004  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.329715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:30.746402  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:30.746405  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:30.746432  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:30.747205  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.835071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0814 11:28:30.846990  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.197484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:30.946719  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.031763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.047021  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.274565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.146759  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.110838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.246255  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.614399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.346329  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.706161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.447040  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.219424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.546642  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.853009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.646679  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.883369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.742044  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:31.742071  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:31.742326  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:31.742349  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:31.742510  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:31.742577  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:31.743484  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:31.743717  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:31.745278  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.121083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:31.745688  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.028321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.745922  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.090375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51966]
I0814 11:28:31.746556  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:31.746568  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:31.746589  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:31.846800  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.095092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:31.946729  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.013117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.047016  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.311245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.146586  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.91014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.247397  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.814188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
E0814 11:28:32.328161  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39619/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/events: dial tcp 127.0.0.1:39619: connect: connection refused' (may retry after sleeping)
I0814 11:28:32.347105  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.076181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.446993  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.304752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.547107  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.401244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.647491  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.408662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.742276  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:32.742505  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:32.742524  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:32.742655  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:32.742694  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:32.743074  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:32.743934  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:32.744222  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:32.745359  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.173651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:32.745627  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.940688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.746196  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.371581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52372]
I0814 11:28:32.746892  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:32.747168  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:32.747217  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:32.846699  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.010989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:32.947125  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.459493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.067643  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (22.99728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.146782  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.181772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.246592  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.959031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.348553  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.971973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.446530  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.919378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.546581  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.817459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.648247  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.778701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.742475  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:33.742696  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:33.742714  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:33.742912  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:33.742952  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:33.743820  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:33.744078  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:33.744352  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:33.745787  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.374616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:33.746154  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.839871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.747053  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:33.747307  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:33.747342  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:33.747818  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.87882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52816]
I0814 11:28:33.846955  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.248259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:33.946948  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.319296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.049545  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (4.862133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.146570  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.934288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.246475  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.860214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.346874  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.164072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.447137  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.442919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.546498  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.707092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.635271  110612 httplog.go:90] GET /api/v1/namespaces/default: (1.798723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.637495  110612 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.723539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.639110  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.152768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.646420  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.829527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.742724  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:34.742970  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:34.742989  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:34.743136  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:34.743190  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:34.743983  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:34.744255  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:34.744708  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:34.745682  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.169817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:34.745683  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.018446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:34.747219  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:34.747442  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:34.747554  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:34.747748  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.802962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53158]
I0814 11:28:34.848383  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.325584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:34.949052  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.724972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.047120  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.45218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.147106  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.425545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.246303  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.66739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.346543  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.858427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.448134  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.346285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.547174  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.405354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.646715  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.742956  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:35.743186  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:35.743206  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:35.743371  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:35.743423  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:35.744218  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:35.744511  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:35.744926  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:35.746566  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.526696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:35.747681  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.480929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:35.747729  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:35.747951  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:35.747971  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:35.748194  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.972015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51674]
I0814 11:28:35.848367  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.603677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:35.946921  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.122321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.047202  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.212976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.146909  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.053923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.246657  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.905231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.347181  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.901115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.447459  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.696369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.547382  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.59243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.646675  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.986188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.743184  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:36.743417  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:36.743443  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:36.743608  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:36.743688  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:36.744370  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:36.744641  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:36.745560  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:36.745774  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.632247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:36.746652  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.671677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0814 11:28:36.746652  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.612493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:36.747935  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:36.748129  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:36.748140  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:36.846613  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.945405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:36.946114  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.488884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
E0814 11:28:36.952273  110612 factory.go:599] Error getting pod permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/signalling-pod for retry: Get http://127.0.0.1:39619/api/v1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/pods/signalling-pod: dial tcp 127.0.0.1:39619: connect: connection refused; retrying...
I0814 11:28:37.046445  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.760629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:37.148691  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.749337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:37.246364  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.741668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:37.346738  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.133718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:37.446654  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.956287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:37.546944  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.24893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
E0814 11:28:37.570816  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36075/apis/events.k8s.io/v1beta1/namespaces/permit-plugind2fb2e1c-2213-4780-9a49-4c33e5f38c1d/events: dial tcp 127.0.0.1:36075: connect: connection refused' (may retry after sleeping)
I0814 11:28:37.647041  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.170522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:37.743540  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:37.743711  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:37.743723  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:37.743894  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:37.743960  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:37.745463  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:37.745512  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:37.745705  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:37.747091  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.551281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:37.747947  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.540799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0814 11:28:37.747973  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.43162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:37.748099  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:37.748734  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:37.748834  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:37.846759  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.097255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:37.946489  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.805505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.046832  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.088957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.147212  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.406623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.246286  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.690534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.347516  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.86321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.446317  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.698889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.546147  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.483099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.646544  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.943022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.743767  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:38.744007  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:38.744030  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:38.744186  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:38.744293  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:38.745945  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:38.746046  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:38.746205  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:38.747739  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.156227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:38.748026  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.46468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0814 11:28:38.748044  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.024559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:38.748471  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:38.748907  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:38.749020  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:38.848219  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.557044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:38.946184  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.561292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.046548  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.789919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.146417  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.87686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.246574  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.961768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.348651  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (4.009488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.446630  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.003791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.547629  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.89775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.646815  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.042576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.743978  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:39.744125  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:39.744134  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:39.744326  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:39.744411  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:39.746099  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:39.746157  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:39.746335  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:39.748813  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.342283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54978]
I0814 11:28:39.748820  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (4.099091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:39.748879  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (4.073817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:39.749093  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:39.749191  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:39.749214  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:39.846764  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.038083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:39.946656  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.863854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.047148  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.413688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.146603  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.904014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.246656  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.980846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.346907  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.914403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.447001  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.271728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.547662  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.839623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.647011  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.371637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.744193  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:40.744396  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:40.744420  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:40.744592  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:40.744642  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:40.746311  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:40.746334  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:40.746407  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.661928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:40.746481  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:40.747542  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.972286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:40.747775  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.367333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54686]
I0814 11:28:40.749319  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:40.749332  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:40.749387  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:40.847289  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.62077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:40.947086  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.107441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:41.046663  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.861758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:41.146277  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.613833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:41.246984  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.333362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:41.344621  110612 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.628427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:41.346122  110612 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.032421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:41.346178  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.609539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:41.347488  110612 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (961.38µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:41.446937  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.125637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:41.547281  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.500372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:41.647042  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.24217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:41.744421  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:41.744654  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:41.744682  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:41.744867  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:41.744934  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:41.746445  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:41.746498  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:41.746606  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:41.747538  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.549495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:41.747545  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.926524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55500]
I0814 11:28:41.747539  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.980288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:41.749498  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:41.749509  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:41.749532  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:41.846909  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.124921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:41.946809  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.057438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.046449  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.817623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.146680  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.891982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.246785  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.076147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.346931  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.143711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.447090  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.280785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.546766  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.006219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.646429  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.720995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.744679  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:42.744858  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:42.744875  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:42.745072  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:42.745131  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:42.746691  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:42.746793  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:42.747114  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.654567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:42.747120  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.900608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:42.747256  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:42.747964  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.068332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55680]
I0814 11:28:42.749679  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:42.749699  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:42.749712  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:42.846949  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.169606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:42.947096  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.282155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.046717  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.993034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.146531  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.782726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.246623  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.878487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.346937  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.096535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.447269  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.4836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.549135  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.38713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.646885  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.039342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.744943  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:43.745162  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:43.745215  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:43.745445  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:43.745516  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:43.746910  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:43.747015  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:43.747420  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:43.749444  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.650077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:43.749690  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.638619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:43.749874  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:43.750094  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:43.750124  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:43.750321  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (5.472191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55248]
I0814 11:28:43.847197  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.49064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:43.946929  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.219855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.046952  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.250347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.148241  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.17023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.246770  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.147211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.346793  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.034007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.448205  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.485268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.546442  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.767571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.634992  110612 httplog.go:90] GET /api/v1/namespaces/default: (1.493783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.636932  110612 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.467401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.639039  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.733383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.647039  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.358566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.745100  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:44.745223  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:44.745232  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:44.745374  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:44.745409  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:44.747097  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:44.747240  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:44.747644  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:44.749350  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.723253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:44.749350  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (4.742895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:44.749630  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.57566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56236]
I0814 11:28:44.750029  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:44.750269  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:44.750462  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 11:28:44.781983  110612 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39619/apis/events.k8s.io/v1beta1/namespaces/permit-plugin7f5f29ac-3e8b-4cec-80ea-959a7de3da68/events: dial tcp 127.0.0.1:39619: connect: connection refused' (may retry after sleeping)
I0814 11:28:44.846544  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.739166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:44.946384  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.700223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.047169  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.421695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.146727  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.054285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.246582  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.915098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.346237  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.625739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.446749  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.07851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.547016  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.284536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.646613  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.996838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.745297  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:45.745496  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:45.745514  110612 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:45.745671  110612 factory.go:550] Unable to schedule preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 11:28:45.745752  110612 factory.go:624] Updating pod condition for preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 11:28:45.747342  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:45.747354  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:45.747830  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:45.748139  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.103125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:45.748154  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (3.470783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.748935  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.574149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56424]
I0814 11:28:45.750214  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:45.750455  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:45.750684  110612 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 11:28:45.846784  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (2.107448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:45.946639  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.95284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:45.948562  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.375115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:45.950816  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/waiting-pod: (1.778959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:45.957145  110612 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/waiting-pod: (5.719095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:45.961994  110612 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:45.962041  110612 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/preemptor-pod
I0814 11:28:45.966086  110612 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (8.365774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55922]
I0814 11:28:45.966788  110612 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/events: (4.276019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.969588  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/waiting-pod: (1.659895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.974185  110612 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin25719e9d-5b95-4a78-978d-e707ed15819d/pods/preemptor-pod: (1.862503ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.975194  110612 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=29654&timeout=5m32s&timeoutSeconds=332&watch=true: (1m1.244523349s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0814 11:28:45.975195  110612 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29654&timeout=6m13s&timeoutSeconds=373&watch=true: (1m1.244148737s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42302]
E0814 11:28:45.975485  110612 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 11:28:45.975522  110612 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=29654&timeout=6m24s&timeoutSeconds=384&watch=true: (1m1.243815005s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42534]
I0814 11:28:45.975551  110612 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=29654&timeout=6m48s&timeoutSeconds=408&watch=true: (1m1.243757825s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42548]
I0814 11:28:45.975675  110612 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=29654&timeout=5m18s&timeoutSeconds=318&watch=true: (1m1.243475104s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42542]
I0814 11:28:45.975714  110612 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=29654&timeout=8m51s&timeoutSeconds=531&watch=true: (1m1.243487095s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42544]
I0814 11:28:45.975877  110612 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=29959&timeout=7m18s&timeoutSeconds=438&watch=true: (1m1.243541864s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42546]
I0814 11:28:45.975893  110612 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=29654&timeout=9m9s&timeoutSeconds=549&watch=true: (1m1.243621132s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42532]
I0814 11:28:45.975886  110612 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=29654&timeout=8m14s&timeoutSeconds=494&watch=true: (1m1.243758275s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42540]
I0814 11:28:45.975950  110612 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=29654&timeout=7m49s&timeoutSeconds=469&watch=true: (1m1.241361485s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42550]
I0814 11:28:45.976248  110612 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=29654&timeout=5m3s&timeoutSeconds=303&watch=true: (1m1.240855132s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42538]
I0814 11:28:45.979891  110612 httplog.go:90] DELETE /api/v1/nodes: (4.579439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.980142  110612 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 11:28:45.981800  110612 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.350634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0814 11:28:45.984468  110612 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.118302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
--- FAIL: TestPreemptWithPermitPlugin (64.79s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-112021.xml
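Both failed assertions come from the test's wait loops: the repeated GET /pods/preemptor-pod requests in the log above (roughly one every 100ms) are the test polling for the preemptor to be scheduled, and "timed out waiting for the condition" is the error wait.Poll returns when the polled condition never becomes true before the timeout. Below is a minimal sketch of such a wait helper; it is not the actual framework_test.go code, the names are illustrative, and it assumes the pre-1.18 client-go Get signature (no context argument) that was in use for this 2019 run.

package integration

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the API server every 100ms until the pod reports
// PodScheduled=True or the timeout expires. On expiry wait.Poll returns
// "timed out waiting for the condition", the error quoted in the failure above.
func waitForPodScheduled(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodScheduled && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}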

Error lines from build-log.txt

... skipping 733 lines ...
W0814 11:15:06.435] W0814 11:15:06.434139   53109 controllermanager.go:527] Skipping "nodeipam"
W0814 11:15:06.435] I0814 11:15:06.434204   53109 certificate_controller.go:113] Starting certificate controller
W0814 11:15:06.436] I0814 11:15:06.434271   53109 controller_utils.go:1029] Waiting for caches to sync for certificate controller
W0814 11:15:06.436] I0814 11:15:06.434705   53109 controllermanager.go:535] Started "daemonset"
W0814 11:15:06.436] I0814 11:15:06.434925   53109 daemon_controller.go:267] Starting daemon sets controller
W0814 11:15:06.436] I0814 11:15:06.434956   53109 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
W0814 11:15:06.437] E0814 11:15:06.435311   53109 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0814 11:15:06.437] W0814 11:15:06.435339   53109 controllermanager.go:527] Skipping "service"
W0814 11:15:06.437] I0814 11:15:06.435866   53109 controllermanager.go:535] Started "job"
W0814 11:15:06.437] I0814 11:15:06.436040   53109 job_controller.go:143] Starting job controller
W0814 11:15:06.437] I0814 11:15:06.436068   53109 controller_utils.go:1029] Waiting for caches to sync for job controller
W0814 11:15:06.438] I0814 11:15:06.436367   53109 controllermanager.go:535] Started "replicationcontroller"
W0814 11:15:06.438] I0814 11:15:06.436520   53109 replica_set.go:182] Starting replicationcontroller controller
... skipping 32 lines ...
W0814 11:15:06.907] I0814 11:15:06.903722   53109 controllermanager.go:535] Started "resourcequota"
W0814 11:15:06.907] I0814 11:15:06.904128   53109 controllermanager.go:535] Started "csrcleaner"
W0814 11:15:06.907] I0814 11:15:06.904567   53109 controllermanager.go:535] Started "ttl"
W0814 11:15:06.907] I0814 11:15:06.905015   53109 node_lifecycle_controller.go:77] Sending events to api server
W0814 11:15:06.907] I0814 11:15:06.905037   53109 resource_quota_controller.go:271] Starting resource quota controller
W0814 11:15:06.908] I0814 11:15:06.905090   53109 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 11:15:06.908] E0814 11:15:06.905092   53109 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 11:15:06.908] W0814 11:15:06.905103   53109 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0814 11:15:06.908] I0814 11:15:06.905116   53109 ttl_controller.go:116] Starting TTL controller
W0814 11:15:06.908] I0814 11:15:06.905027   53109 cleaner.go:81] Starting CSR cleaner controller
W0814 11:15:06.908] I0814 11:15:06.905204   53109 controller_utils.go:1029] Waiting for caches to sync for TTL controller
W0814 11:15:06.909] I0814 11:15:06.905146   53109 resource_quota_monitor.go:303] QuotaMonitor running
W0814 11:15:06.909] I0814 11:15:06.905601   53109 controllermanager.go:535] Started "pvc-protection"
... skipping 13 lines ...
W0814 11:15:06.911] I0814 11:15:06.908758   53109 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
W0814 11:15:06.935] I0814 11:15:06.935117   53109 controller_utils.go:1036] Caches are synced for certificate controller
W0814 11:15:07.006] I0814 11:15:07.005407   53109 controller_utils.go:1036] Caches are synced for TTL controller
W0814 11:15:07.007] I0814 11:15:07.006650   53109 controller_utils.go:1036] Caches are synced for GC controller
W0814 11:15:07.007] I0814 11:15:07.007397   53109 controller_utils.go:1036] Caches are synced for deployment controller
W0814 11:15:07.009] I0814 11:15:07.008995   53109 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0814 11:15:07.020] E0814 11:15:07.020101   53109 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0814 11:15:07.022] E0814 11:15:07.022141   53109 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 11:15:07.026] I0814 11:15:07.025926   53109 controller_utils.go:1036] Caches are synced for PV protection controller
W0814 11:15:07.033] I0814 11:15:07.032295   53109 controller_utils.go:1036] Caches are synced for endpoint controller
W0814 11:15:07.033] I0814 11:15:07.032522   53109 controller_utils.go:1036] Caches are synced for taint controller
W0814 11:15:07.033] I0814 11:15:07.032595   53109 taint_manager.go:186] Starting NoExecuteTaintManager
W0814 11:15:07.034] E0814 11:15:07.034192   53109 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 11:15:07.034] I0814 11:15:07.034334   53109 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0814 11:15:07.035] I0814 11:15:07.035092   53109 controller_utils.go:1036] Caches are synced for daemon sets controller
W0814 11:15:07.039] I0814 11:15:07.038723   53109 controller_utils.go:1036] Caches are synced for job controller
W0814 11:15:07.116] I0814 11:15:07.111540   53109 controller_utils.go:1036] Caches are synced for HPA controller
W0814 11:15:07.133] I0814 11:15:07.132375   53109 controller_utils.go:1036] Caches are synced for disruption controller
W0814 11:15:07.133] I0814 11:15:07.132434   53109 disruption.go:341] Sending events to api server.
W0814 11:15:07.137] I0814 11:15:07.136810   53109 controller_utils.go:1036] Caches are synced for ReplicationController controller
I0814 11:15:07.238] +++ [0814 11:15:07] On try 3, controller-manager: ok
W0814 11:15:07.339] I0814 11:15:07.323953   53109 controller_utils.go:1036] Caches are synced for namespace controller
W0814 11:15:07.339] I0814 11:15:07.326668   53109 controller_utils.go:1036] Caches are synced for service account controller
W0814 11:15:07.339] I0814 11:15:07.329149   49637 controller.go:606] quota admission added evaluator for: serviceaccounts
W0814 11:15:07.368] W0814 11:15:07.367478   53109 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0814 11:15:07.406] I0814 11:15:07.405985   53109 controller_utils.go:1036] Caches are synced for PVC protection controller
W0814 11:15:07.426] I0814 11:15:07.425478   53109 controller_utils.go:1036] Caches are synced for attach detach controller
W0814 11:15:07.432] I0814 11:15:07.432041   53109 controller_utils.go:1036] Caches are synced for persistent volume controller
W0814 11:15:07.450] I0814 11:15:07.449698   53109 controller_utils.go:1036] Caches are synced for stateful set controller
W0814 11:15:07.450] I0814 11:15:07.450152   53109 controller_utils.go:1036] Caches are synced for expand controller
I0814 11:15:07.551] node/127.0.0.1 created
... skipping 88 lines ...
I0814 11:15:11.472] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:15:11.475] +++ command: run_RESTMapper_evaluation_tests
I0814 11:15:11.486] +++ [0814 11:15:11] Creating namespace namespace-1565781311-4149
I0814 11:15:11.555] namespace/namespace-1565781311-4149 created
I0814 11:15:11.622] Context "test" modified.
I0814 11:15:11.629] +++ [0814 11:15:11] Testing RESTMapper
I0814 11:15:11.731] +++ [0814 11:15:11] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0814 11:15:11.746] +++ exit code: 0
I0814 11:15:11.861] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0814 11:15:11.861] bindings                                                                      true         Binding
I0814 11:15:11.862] componentstatuses                 cs                                          false        ComponentStatus
I0814 11:15:11.862] configmaps                        cm                                          true         ConfigMap
I0814 11:15:11.862] endpoints                         ep                                          true         Endpoints
... skipping 646 lines ...
I0814 11:15:29.467] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:15:29.633] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:15:29.726] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:15:29.895] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:15:29.990] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:15:30.079] pod "valid-pod" force deleted
W0814 11:15:30.179] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0814 11:15:30.180] error: setting 'all' parameter but found a non empty selector. 
W0814 11:15:30.180] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 11:15:30.281] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:15:30.288] core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0814 11:15:30.370] namespace/test-kubectl-describe-pod created
I0814 11:15:30.463] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0814 11:15:30.549] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0814 11:15:31.583] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0814 11:15:31.660] poddisruptionbudget.policy/test-pdb-4 created
I0814 11:15:31.756] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0814 11:15:31.922] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:15:32.124] pod/env-test-pod created
W0814 11:15:32.225] I0814 11:15:31.143820   49637 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0814 11:15:32.226] error: min-available and max-unavailable cannot be both specified
I0814 11:15:32.326] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0814 11:15:32.326] Name:         env-test-pod
I0814 11:15:32.326] Namespace:    test-kubectl-describe-pod
I0814 11:15:32.326] Priority:     0
I0814 11:15:32.326] Node:         <none>
I0814 11:15:32.327] Labels:       <none>
... skipping 173 lines ...
I0814 11:15:45.679] pod/valid-pod patched
I0814 11:15:45.776] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0814 11:15:45.856] pod/valid-pod patched
I0814 11:15:45.949] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0814 11:15:46.116] pod/valid-pod patched
I0814 11:15:46.227] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 11:15:46.408] +++ [0814 11:15:46] "kubectl patch with resourceVersion 496" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0814 11:15:46.659] pod "valid-pod" deleted
I0814 11:15:46.670] pod/valid-pod replaced
I0814 11:15:46.769] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0814 11:15:46.930] Successful
I0814 11:15:46.931] message:error: --grace-period must have --force specified
I0814 11:15:46.931] has:\-\-grace-period must have \-\-force specified
I0814 11:15:47.102] Successful
I0814 11:15:47.103] message:error: --timeout must have --force specified
I0814 11:15:47.103] has:\-\-timeout must have \-\-force specified
I0814 11:15:47.266] node/node-v1-test created
W0814 11:15:47.367] W0814 11:15:47.265423   53109 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0814 11:15:47.468] node/node-v1-test replaced
I0814 11:15:47.541] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0814 11:15:47.622] node "node-v1-test" deleted
I0814 11:15:47.727] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 11:15:48.079] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0814 11:15:49.476] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 25 lines ...
I0814 11:15:49.794]     name: kubernetes-pause
I0814 11:15:49.794] has:localonlyvalue
I0814 11:15:49.849] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 11:15:50.089] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 11:15:50.229] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 11:15:50.353] pod/valid-pod labeled
W0814 11:15:50.454] error: 'name' already has a value (valid-pod), and --overwrite is false
I0814 11:15:50.555] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0814 11:15:50.642] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:15:50.764] pod "valid-pod" force deleted
W0814 11:15:50.865] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 11:15:50.966] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:15:50.967] +++ [0814 11:15:50] Creating namespace namespace-1565781350-3129
... skipping 82 lines ...
I0814 11:15:58.289] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0814 11:15:58.292] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:15:58.295] +++ command: run_kubectl_create_error_tests
I0814 11:15:58.308] +++ [0814 11:15:58] Creating namespace namespace-1565781358-19463
I0814 11:15:58.386] namespace/namespace-1565781358-19463 created
I0814 11:15:58.459] Context "test" modified.
I0814 11:15:58.466] +++ [0814 11:15:58] Testing kubectl create with error
W0814 11:15:58.567] Error: must specify one of -f and -k
W0814 11:15:58.568] 
W0814 11:15:58.568] Create a resource from a file or from stdin.
W0814 11:15:58.568] 
W0814 11:15:58.568]  JSON and YAML formats are accepted.
W0814 11:15:58.568] 
W0814 11:15:58.568] Examples:
... skipping 41 lines ...
W0814 11:15:58.573] 
W0814 11:15:58.573] Usage:
W0814 11:15:58.573]   kubectl create -f FILENAME [options]
W0814 11:15:58.573] 
W0814 11:15:58.573] Use "kubectl <command> --help" for more information about a given command.
W0814 11:15:58.574] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0814 11:15:58.696] +++ [0814 11:15:58] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0814 11:15:58.797] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 11:15:58.797] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 11:15:58.898] +++ exit code: 0
I0814 11:15:58.912] Recording: run_kubectl_apply_tests
I0814 11:15:58.912] Running command: run_kubectl_apply_tests
I0814 11:15:58.932] 
... skipping 20 lines ...
W0814 11:16:00.998] I0814 11:16:00.997797   49637 client.go:354] scheme "" not registered, fallback to default scheme
W0814 11:16:00.999] I0814 11:16:00.998090   49637 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0814 11:16:00.999] I0814 11:16:00.998308   49637 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 11:16:01.000] I0814 11:16:01.000035   49637 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 11:16:01.002] I0814 11:16:01.001733   49637 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0814 11:16:01.103] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0814 11:16:01.203] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0814 11:16:01.304] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 11:16:01.304] +++ exit code: 0
I0814 11:16:01.304] Recording: run_kubectl_run_tests
I0814 11:16:01.304] Running command: run_kubectl_run_tests
I0814 11:16:01.304] 
I0814 11:16:01.305] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 94 lines ...
I0814 11:16:03.768] Context "test" modified.
I0814 11:16:03.775] +++ [0814 11:16:03] Testing kubectl create filter
I0814 11:16:03.860] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:04.056] pod/selector-test-pod created
I0814 11:16:04.150] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0814 11:16:04.234] Successful
I0814 11:16:04.234] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0814 11:16:04.234] has:pods "selector-test-pod-dont-apply" not found
I0814 11:16:04.312] pod "selector-test-pod" deleted
I0814 11:16:04.330] +++ exit code: 0
I0814 11:16:04.360] Recording: run_kubectl_apply_deployments_tests
I0814 11:16:04.361] Running command: run_kubectl_apply_deployments_tests
I0814 11:16:04.382] 
... skipping 27 lines ...
I0814 11:16:06.204] apps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:06.292] apps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:06.379] apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:06.545] deployment.apps/nginx created
I0814 11:16:06.645] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0814 11:16:10.888] Successful
I0814 11:16:10.888] message:Error from server (Conflict): error when applying patch:
I0814 11:16:10.889] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565781364-24944\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0814 11:16:10.890] to:
I0814 11:16:10.890] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0814 11:16:10.890] Name: "nginx", Namespace: "namespace-1565781364-24944"
I0814 11:16:10.893] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565781364-24944\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-14T11:16:06Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T11:16:06Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-14T11:16:06Z"]] "name":"nginx" "namespace":"namespace-1565781364-24944" "resourceVersion":"589" "selfLink":"/apis/apps/v1/namespaces/namespace-1565781364-24944/deployments/nginx" "uid":"b193be9f-3f91-4727-8777-7d0cdbd69ef5"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-14T11:16:06Z" "lastUpdateTime":"2019-08-14T11:16:06Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-14T11:16:06Z" "lastUpdateTime":"2019-08-14T11:16:06Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0814 11:16:10.893] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0814 11:16:10.893] has:Error from server (Conflict)
W0814 11:16:10.994] I0814 11:16:06.547899   53109 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781364-24944", Name:"nginx", UID:"b193be9f-3f91-4727-8777-7d0cdbd69ef5", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0814 11:16:10.995] I0814 11:16:06.551566   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-7dbc4d9f", UID:"4ca9d6dc-ac49-44cf-9885-cd3b75361f33", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-7qxss
W0814 11:16:10.996] I0814 11:16:06.554638   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-7dbc4d9f", UID:"4ca9d6dc-ac49-44cf-9885-cd3b75361f33", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-jtw2b
W0814 11:16:10.996] I0814 11:16:06.558305   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-7dbc4d9f", UID:"4ca9d6dc-ac49-44cf-9885-cd3b75361f33", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-f4vkr
W0814 11:16:12.723] I0814 11:16:12.722502   53109 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565781355-927
I0814 11:16:16.158] deployment.apps/nginx configured
... skipping 2 lines ...
I0814 11:16:16.255]           "name": "nginx2"
I0814 11:16:16.255] has:"name": "nginx2"
W0814 11:16:16.355] I0814 11:16:16.163918   53109 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781364-24944", Name:"nginx", UID:"31c4aa71-493a-4c9f-b11b-6848cf0906de", APIVersion:"apps/v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 11:16:16.356] I0814 11:16:16.167977   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-594f77b9f6", UID:"db771711-27df-477e-9971-080b86e495ee", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-96z6n
W0814 11:16:16.356] I0814 11:16:16.171761   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-594f77b9f6", UID:"db771711-27df-477e-9971-080b86e495ee", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-rjzw6
W0814 11:16:16.357] I0814 11:16:16.172728   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-594f77b9f6", UID:"db771711-27df-477e-9971-080b86e495ee", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-62hlm
W0814 11:16:20.547] E0814 11:16:20.546710   53109 replica_set.go:450] Sync "namespace-1565781364-24944/nginx-594f77b9f6" failed with Operation cannot be fulfilled on replicasets.apps "nginx-594f77b9f6": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1565781364-24944/nginx-594f77b9f6, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: db771711-27df-477e-9971-080b86e495ee, UID in object meta: 
W0814 11:16:21.524] I0814 11:16:21.523334   53109 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781364-24944", Name:"nginx", UID:"1d8601fd-acec-4dd4-9161-c2549775eed3", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 11:16:21.529] I0814 11:16:21.528671   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-594f77b9f6", UID:"11f7ed1a-c853-469a-9879-0476abfebf0c", APIVersion:"apps/v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-qdbt8
W0814 11:16:21.535] I0814 11:16:21.534308   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-594f77b9f6", UID:"11f7ed1a-c853-469a-9879-0476abfebf0c", APIVersion:"apps/v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-c78gn
W0814 11:16:21.536] I0814 11:16:21.536250   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781364-24944", Name:"nginx-594f77b9f6", UID:"11f7ed1a-c853-469a-9879-0476abfebf0c", APIVersion:"apps/v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-hlztq
I0814 11:16:21.637] Successful
I0814 11:16:21.638] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 159 lines ...
I0814 11:16:23.627] +++ [0814 11:16:23] Creating namespace namespace-1565781383-19403
I0814 11:16:23.702] namespace/namespace-1565781383-19403 created
I0814 11:16:23.771] Context "test" modified.
I0814 11:16:23.778] +++ [0814 11:16:23] Testing kubectl get
I0814 11:16:23.869] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:23.957] Successful
I0814 11:16:23.957] message:Error from server (NotFound): pods "abc" not found
I0814 11:16:23.958] has:pods "abc" not found
I0814 11:16:24.045] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:24.134] Successful
I0814 11:16:24.134] message:Error from server (NotFound): pods "abc" not found
I0814 11:16:24.134] has:pods "abc" not found
I0814 11:16:24.231] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:24.319] Successful
I0814 11:16:24.319] message:{
I0814 11:16:24.320]     "apiVersion": "v1",
I0814 11:16:24.320]     "items": [],
... skipping 23 lines ...
I0814 11:16:24.656] has not:No resources found
I0814 11:16:24.737] Successful
I0814 11:16:24.738] message:NAME
I0814 11:16:24.738] has not:No resources found
I0814 11:16:24.823] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:24.924] (BSuccessful
I0814 11:16:24.924] message:error: the server doesn't have a resource type "foobar"
I0814 11:16:24.924] has not:No resources found
I0814 11:16:25.008] Successful
I0814 11:16:25.009] message:No resources found in namespace-1565781383-19403 namespace.
I0814 11:16:25.009] has:No resources found
I0814 11:16:25.089] Successful
I0814 11:16:25.089] message:
I0814 11:16:25.089] has not:No resources found
I0814 11:16:25.167] Successful
I0814 11:16:25.167] message:No resources found in namespace-1565781383-19403 namespace.
I0814 11:16:25.167] has:No resources found
I0814 11:16:25.254] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:25.340] Successful
I0814 11:16:25.341] message:Error from server (NotFound): pods "abc" not found
I0814 11:16:25.341] has:pods "abc" not found
I0814 11:16:25.342] FAIL!
I0814 11:16:25.342] message:Error from server (NotFound): pods "abc" not found
I0814 11:16:25.342] has not:List
I0814 11:16:25.343] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0814 11:16:25.452] Successful
I0814 11:16:25.452] message:I0814 11:16:25.406164   63689 loader.go:375] Config loaded from file:  /tmp/tmp.2yLRPQmPLp/.kube/config
I0814 11:16:25.453] I0814 11:16:25.407939   63689 round_trippers.go:471] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0814 11:16:25.453] I0814 11:16:25.429458   63689 round_trippers.go:471] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0814 11:16:31.052] Successful
I0814 11:16:31.053] message:NAME    DATA   AGE
I0814 11:16:31.053] one     0      1s
I0814 11:16:31.053] three   0      1s
I0814 11:16:31.053] two     0      1s
I0814 11:16:31.053] STATUS    REASON          MESSAGE
I0814 11:16:31.054] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:16:31.054] has not:watch is only supported on individual resources
I0814 11:16:32.154] Successful
I0814 11:16:32.155] message:STATUS    REASON          MESSAGE
I0814 11:16:32.155] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:16:32.155] has not:watch is only supported on individual resources
I0814 11:16:32.160] +++ [0814 11:16:32] Creating namespace namespace-1565781392-12771
I0814 11:16:32.237] namespace/namespace-1565781392-12771 created
I0814 11:16:32.313] Context "test" modified.
I0814 11:16:32.414] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:32.595] pod/valid-pod created
... skipping 104 lines ...
I0814 11:16:32.697] }
I0814 11:16:32.796] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:16:33.052] <no value>Successful
I0814 11:16:33.052] message:valid-pod:
I0814 11:16:33.052] has:valid-pod:
I0814 11:16:33.140] Successful
I0814 11:16:33.141] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0814 11:16:33.141] 	template was:
I0814 11:16:33.141] 		{.missing}
I0814 11:16:33.141] 	object given to jsonpath engine was:
I0814 11:16:33.144] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-14T11:16:32Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-14T11:16:32Z"}}, "name":"valid-pod", "namespace":"namespace-1565781392-12771", "resourceVersion":"690", "selfLink":"/api/v1/namespaces/namespace-1565781392-12771/pods/valid-pod", "uid":"a589c725-b342-4c63-8a41-0f5286f76f24"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0814 11:16:33.144] has:missing is not found
I0814 11:16:33.234] Successful
I0814 11:16:33.235] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0814 11:16:33.235] 	template was:
I0814 11:16:33.235] 		{{.missing}}
I0814 11:16:33.235] 	raw data was:
I0814 11:16:33.237] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-14T11:16:32Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-14T11:16:32Z"}],"name":"valid-pod","namespace":"namespace-1565781392-12771","resourceVersion":"690","selfLink":"/api/v1/namespaces/namespace-1565781392-12771/pods/valid-pod","uid":"a589c725-b342-4c63-8a41-0f5286f76f24"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0814 11:16:33.237] 	object given to template engine was:
I0814 11:16:33.239] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-14T11:16:32Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-14T11:16:32Z]] name:valid-pod namespace:namespace-1565781392-12771 resourceVersion:690 selfLink:/api/v1/namespaces/namespace-1565781392-12771/pods/valid-pod uid:a589c725-b342-4c63-8a41-0f5286f76f24] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0814 11:16:33.239] has:map has no entry for key "missing"
W0814 11:16:33.340] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0814 11:16:34.331] Successful
I0814 11:16:34.332] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 11:16:34.332] valid-pod   0/1     Pending   0          1s
I0814 11:16:34.332] STATUS      REASON          MESSAGE
I0814 11:16:34.332] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:16:34.332] has:STATUS
I0814 11:16:34.333] Successful
I0814 11:16:34.333] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 11:16:34.333] valid-pod   0/1     Pending   0          1s
I0814 11:16:34.333] STATUS      REASON          MESSAGE
I0814 11:16:34.334] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:16:34.334] has:valid-pod
I0814 11:16:35.430] Successful
I0814 11:16:35.430] message:pod/valid-pod
I0814 11:16:35.430] has not:STATUS
I0814 11:16:35.432] Successful
I0814 11:16:35.432] message:pod/valid-pod
... skipping 144 lines ...
I0814 11:16:36.537] status:
I0814 11:16:36.538]   phase: Pending
I0814 11:16:36.538]   qosClass: Guaranteed
I0814 11:16:36.538] ---
I0814 11:16:36.538] has:name: valid-pod
I0814 11:16:36.617] Successful
I0814 11:16:36.617] message:Error from server (NotFound): pods "invalid-pod" not found
I0814 11:16:36.617] has:"invalid-pod" not found
I0814 11:16:36.697] pod "valid-pod" deleted
I0814 11:16:36.796] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:16:36.954] pod/redis-master created
I0814 11:16:36.958] pod/valid-pod created
I0814 11:16:37.050] Successful
... skipping 35 lines ...
I0814 11:16:38.231] +++ command: run_kubectl_exec_pod_tests
I0814 11:16:38.244] +++ [0814 11:16:38] Creating namespace namespace-1565781398-16679
I0814 11:16:38.316] namespace/namespace-1565781398-16679 created
I0814 11:16:38.392] Context "test" modified.
I0814 11:16:38.398] +++ [0814 11:16:38] Testing kubectl exec POD COMMAND
I0814 11:16:38.485] Successful
I0814 11:16:38.486] message:Error from server (NotFound): pods "abc" not found
I0814 11:16:38.486] has:pods "abc" not found
I0814 11:16:38.640] pod/test-pod created
I0814 11:16:38.749] Successful
I0814 11:16:38.749] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 11:16:38.749] has not:pods "test-pod" not found
I0814 11:16:38.751] Successful
I0814 11:16:38.751] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 11:16:38.752] has not:pod or type/name must be specified
I0814 11:16:38.836] pod "test-pod" deleted
I0814 11:16:38.857] +++ exit code: 0
I0814 11:16:38.894] Recording: run_kubectl_exec_resource_name_tests
I0814 11:16:38.895] Running command: run_kubectl_exec_resource_name_tests
I0814 11:16:38.918] 
... skipping 2 lines ...
I0814 11:16:38.926] +++ command: run_kubectl_exec_resource_name_tests
I0814 11:16:38.940] +++ [0814 11:16:38] Creating namespace namespace-1565781398-15931
I0814 11:16:39.031] namespace/namespace-1565781398-15931 created
I0814 11:16:39.108] Context "test" modified.
I0814 11:16:39.116] +++ [0814 11:16:39] Testing kubectl exec TYPE/NAME COMMAND
I0814 11:16:39.220] Successful
I0814 11:16:39.220] message:error: the server doesn't have a resource type "foo"
I0814 11:16:39.220] has:error:
I0814 11:16:39.308] Successful
I0814 11:16:39.308] message:Error from server (NotFound): deployments.apps "bar" not found
I0814 11:16:39.309] has:"bar" not found
I0814 11:16:39.464] pod/test-pod created
I0814 11:16:39.639] replicaset.apps/frontend created
W0814 11:16:39.740] I0814 11:16:39.643889   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781398-15931", Name:"frontend", UID:"4e8a8684-6870-45ca-a99a-82a05296998d", APIVersion:"apps/v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6cgzk
W0814 11:16:39.741] I0814 11:16:39.647894   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781398-15931", Name:"frontend", UID:"4e8a8684-6870-45ca-a99a-82a05296998d", APIVersion:"apps/v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p7skh
W0814 11:16:39.742] I0814 11:16:39.649681   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781398-15931", Name:"frontend", UID:"4e8a8684-6870-45ca-a99a-82a05296998d", APIVersion:"apps/v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ldw66
I0814 11:16:39.842] configmap/test-set-env-config created
I0814 11:16:39.882] Successful
I0814 11:16:39.882] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0814 11:16:39.883] has:not implemented
I0814 11:16:39.976] Successful
I0814 11:16:39.976] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 11:16:39.976] has not:not found
I0814 11:16:39.978] Successful
I0814 11:16:39.978] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 11:16:39.978] has not:pod or type/name must be specified
I0814 11:16:40.084] Successful
I0814 11:16:40.084] message:Error from server (BadRequest): pod frontend-6cgzk does not have a host assigned
I0814 11:16:40.085] has not:not found
I0814 11:16:40.086] Successful
I0814 11:16:40.087] message:Error from server (BadRequest): pod frontend-6cgzk does not have a host assigned
I0814 11:16:40.087] has not:pod or type/name must be specified
I0814 11:16:40.180] pod "test-pod" deleted
I0814 11:16:40.268] replicaset.apps "frontend" deleted
I0814 11:16:40.356] configmap "test-set-env-config" deleted
I0814 11:16:40.376] +++ exit code: 0
I0814 11:16:40.416] Recording: run_create_secret_tests
I0814 11:16:40.417] Running command: run_create_secret_tests
I0814 11:16:40.439] 
I0814 11:16:40.442] +++ Running case: test-cmd.run_create_secret_tests 
I0814 11:16:40.444] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:16:40.447] +++ command: run_create_secret_tests
I0814 11:16:40.542] Successful
I0814 11:16:40.542] message:Error from server (NotFound): secrets "mysecret" not found
I0814 11:16:40.543] has:secrets "mysecret" not found
I0814 11:16:40.702] Successful
I0814 11:16:40.702] message:Error from server (NotFound): secrets "mysecret" not found
I0814 11:16:40.703] has:secrets "mysecret" not found
I0814 11:16:40.704] Successful
I0814 11:16:40.704] message:user-specified
I0814 11:16:40.704] has:user-specified
I0814 11:16:40.778] Successful
I0814 11:16:40.855] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"80e9f2ff-455e-4c11-8880-a131b287b659","resourceVersion":"764","creationTimestamp":"2019-08-14T11:16:40Z"}}
... skipping 2 lines ...
I0814 11:16:41.028] has:uid
I0814 11:16:41.108] Successful
I0814 11:16:41.109] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"80e9f2ff-455e-4c11-8880-a131b287b659","resourceVersion":"765","creationTimestamp":"2019-08-14T11:16:40Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-14T11:16:41Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0814 11:16:41.109] has:config1
I0814 11:16:41.183] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"80e9f2ff-455e-4c11-8880-a131b287b659"}}
I0814 11:16:41.275] Successful
I0814 11:16:41.276] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0814 11:16:41.276] has:configmaps "tester-update-cm" not found
I0814 11:16:41.290] +++ exit code: 0
I0814 11:16:41.328] Recording: run_kubectl_create_kustomization_directory_tests
I0814 11:16:41.328] Running command: run_kubectl_create_kustomization_directory_tests
I0814 11:16:41.352] 
I0814 11:16:41.355] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
W0814 11:16:44.097] I0814 11:16:41.841181   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781398-15931", Name:"test-the-deployment-55cf944b", UID:"cd73e473-e6de-4c92-a220-5246a7a606c7", APIVersion:"apps/v1", ResourceVersion:"773", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-55cf944b-2mbck
W0814 11:16:44.098] I0814 11:16:41.843624   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781398-15931", Name:"test-the-deployment-55cf944b", UID:"cd73e473-e6de-4c92-a220-5246a7a606c7", APIVersion:"apps/v1", ResourceVersion:"773", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-55cf944b-q7r7b
I0814 11:16:45.076] Successful
I0814 11:16:45.077] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 11:16:45.077] valid-pod   0/1     Pending   0          1s
I0814 11:16:45.077] STATUS      REASON          MESSAGE
I0814 11:16:45.077] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 11:16:45.077] has:Timeout exceeded while reading body
I0814 11:16:45.166] Successful
I0814 11:16:45.166] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 11:16:45.166] valid-pod   0/1     Pending   0          2s
I0814 11:16:45.166] has:valid-pod
I0814 11:16:45.237] Successful
I0814 11:16:45.238] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0814 11:16:45.238] has:Invalid timeout value
I0814 11:16:45.317] pod "valid-pod" deleted
I0814 11:16:45.337] +++ exit code: 0
I0814 11:16:45.373] Recording: run_crd_tests
I0814 11:16:45.373] Running command: run_crd_tests
I0814 11:16:45.396] 
... skipping 244 lines ...
I0814 11:16:49.872] foo.company.com/test patched
I0814 11:16:49.963] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0814 11:16:50.045] foo.company.com/test patched
I0814 11:16:50.137] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0814 11:16:50.222] foo.company.com/test patched
I0814 11:16:50.308] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0814 11:16:50.453] +++ [0814 11:16:50] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0814 11:16:50.515] {
I0814 11:16:50.516]     "apiVersion": "company.com/v1",
I0814 11:16:50.516]     "kind": "Foo",
I0814 11:16:50.516]     "metadata": {
I0814 11:16:50.516]         "annotations": {
I0814 11:16:50.516]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 348 lines ...
I0814 11:17:15.533] namespace/non-native-resources created
I0814 11:17:15.701] bar.company.com/test created
I0814 11:17:15.806] crd.sh:455: Successful get bars {{len .items}}: 1
I0814 11:17:15.889] namespace "non-native-resources" deleted
I0814 11:17:21.100] crd.sh:458: Successful get bars {{len .items}}: 0
I0814 11:17:21.263] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0814 11:17:21.364] Error from server (NotFound): namespaces "non-native-resources" not found
I0814 11:17:21.465] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0814 11:17:21.469] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 11:17:21.576] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0814 11:17:21.608] +++ exit code: 0
I0814 11:17:21.647] Recording: run_cmd_with_img_tests
I0814 11:17:21.647] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0814 11:17:21.982] I0814 11:17:21.978319   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781441-15717", Name:"test1-9797f89d8", UID:"9cd14bf8-ec18-43d2-8c76-a2252afa5cfa", APIVersion:"apps/v1", ResourceVersion:"920", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-bc4pw
I0814 11:17:22.083] Successful
I0814 11:17:22.083] message:deployment.apps/test1 created
I0814 11:17:22.084] has:deployment.apps/test1 created
I0814 11:17:22.084] deployment.apps "test1" deleted
I0814 11:17:22.170] Successful
I0814 11:17:22.171] message:error: Invalid image name "InvalidImageName": invalid reference format
I0814 11:17:22.171] has:error: Invalid image name "InvalidImageName": invalid reference format
I0814 11:17:22.184] +++ exit code: 0
I0814 11:17:22.225] +++ [0814 11:17:22] Testing recursive resources
I0814 11:17:22.232] +++ [0814 11:17:22] Creating namespace namespace-1565781442-29710
I0814 11:17:22.308] namespace/namespace-1565781442-29710 created
I0814 11:17:22.382] Context "test" modified.
I0814 11:17:22.475] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:22.775] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:22.777] Successful
I0814 11:17:22.777] message:pod/busybox0 created
I0814 11:17:22.777] pod/busybox1 created
I0814 11:17:22.778] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 11:17:22.778] has:error validating data: kind not set
I0814 11:17:22.870] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:23.050] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0814 11:17:23.052] Successful
I0814 11:17:23.053] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:23.053] has:Object 'Kind' is missing
I0814 11:17:23.144] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:23.426] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 11:17:23.429] Successful
I0814 11:17:23.429] message:pod/busybox0 replaced
I0814 11:17:23.429] pod/busybox1 replaced
I0814 11:17:23.430] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 11:17:23.430] has:error validating data: kind not set
I0814 11:17:23.528] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:23.649] (BSuccessful
I0814 11:17:23.650] message:Name:         busybox0
I0814 11:17:23.651] Namespace:    namespace-1565781442-29710
I0814 11:17:23.651] Priority:     0
I0814 11:17:23.651] Node:         <none>
... skipping 159 lines ...
I0814 11:17:23.689] has:Object 'Kind' is missing
I0814 11:17:23.747] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:23.928] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0814 11:17:23.931] Successful
I0814 11:17:23.931] message:pod/busybox0 annotated
I0814 11:17:23.932] pod/busybox1 annotated
I0814 11:17:23.932] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:23.932] has:Object 'Kind' is missing
I0814 11:17:24.023] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:24.323] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 11:17:24.325] Successful
I0814 11:17:24.325] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 11:17:24.325] pod/busybox0 configured
I0814 11:17:24.326] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 11:17:24.326] pod/busybox1 configured
I0814 11:17:24.326] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 11:17:24.327] has:error validating data: kind not set
I0814 11:17:24.426] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:24.588] deployment.apps/nginx created
W0814 11:17:24.689] W0814 11:17:22.276029   49637 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 11:17:24.690] E0814 11:17:22.277766   53109 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.690] W0814 11:17:22.379937   49637 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 11:17:24.690] E0814 11:17:22.381636   53109 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.691] W0814 11:17:22.480687   49637 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 11:17:24.691] E0814 11:17:22.482656   53109 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.691] W0814 11:17:22.588814   49637 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 11:17:24.691] E0814 11:17:22.590761   53109 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.691] E0814 11:17:23.279275   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.692] E0814 11:17:23.383447   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.692] E0814 11:17:23.484285   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.692] E0814 11:17:23.592752   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.692] E0814 11:17:24.280776   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.692] E0814 11:17:24.385276   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.693] E0814 11:17:24.486115   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.693] I0814 11:17:24.593439   53109 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781442-29710", Name:"nginx", UID:"f11d404b-0935-4881-b54e-3ef38a7f87ca", APIVersion:"apps/v1", ResourceVersion:"944", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0814 11:17:24.693] E0814 11:17:24.593915   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:24.694] I0814 11:17:24.598258   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781442-29710", Name:"nginx-bbbbb95b5", UID:"58289262-ed0d-41e7-b231-2ad97c24ab5c", APIVersion:"apps/v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-9dfbh
W0814 11:17:24.694] I0814 11:17:24.601103   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781442-29710", Name:"nginx-bbbbb95b5", UID:"58289262-ed0d-41e7-b231-2ad97c24ab5c", APIVersion:"apps/v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-8dccx
W0814 11:17:24.694] I0814 11:17:24.601217   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781442-29710", Name:"nginx-bbbbb95b5", UID:"58289262-ed0d-41e7-b231-2ad97c24ab5c", APIVersion:"apps/v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-b8b59
I0814 11:17:24.795] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 11:17:24.795] (Bgeneric-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 11:17:24.960] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
... skipping 45 lines ...
W0814 11:17:25.145] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 11:17:25.246] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:25.342] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:25.345] Successful
I0814 11:17:25.345] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0814 11:17:25.345] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 11:17:25.345] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:25.345] has:Object 'Kind' is missing
I0814 11:17:25.438] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:25.527] Successful
I0814 11:17:25.528] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:25.528] has:busybox0:busybox1:
I0814 11:17:25.531] Successful
I0814 11:17:25.531] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:25.532] has:Object 'Kind' is missing
I0814 11:17:25.630] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:25.725] pod/busybox0 labeled
I0814 11:17:25.725] pod/busybox1 labeled
I0814 11:17:25.726] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:25.820] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0814 11:17:25.822] Successful
I0814 11:17:25.822] message:pod/busybox0 labeled
I0814 11:17:25.822] pod/busybox1 labeled
I0814 11:17:25.822] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:25.823] has:Object 'Kind' is missing
I0814 11:17:25.917] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:26.007] pod/busybox0 patched
I0814 11:17:26.007] pod/busybox1 patched
I0814 11:17:26.008] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:26.108] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0814 11:17:26.110] Successful
I0814 11:17:26.110] message:pod/busybox0 patched
I0814 11:17:26.111] pod/busybox1 patched
I0814 11:17:26.111] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:26.112] has:Object 'Kind' is missing
I0814 11:17:26.206] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:26.379] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:26.381] Successful
I0814 11:17:26.381] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 11:17:26.382] pod "busybox0" force deleted
I0814 11:17:26.382] pod "busybox1" force deleted
I0814 11:17:26.382] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 11:17:26.382] has:Object 'Kind' is missing
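Note on the recurring decode error above: the recursive test directory deliberately contains one manifest (hack/testdata/recursive/pod/pod/busybox-broken.yaml) whose "kind" key is misspelled as "ind", so every filename-based kubectl command that walks the tree reports "Object 'Kind' is missing" while still processing the valid busybox0/busybox1 manifests. A minimal sketch of that failure mode, assuming a scratch directory (the exact test-cmd invocations may differ):

# Directory processed with --recursive; one manifest has "ind" instead of "kind".
mkdir -p /tmp/recursive-demo/pod
cat > /tmp/recursive-demo/pod/busybox-broken.yaml <<'EOF'
apiVersion: v1
ind: Pod          # intentionally misspelled "kind"; decoding fails
metadata:
  name: busybox2
  labels:
    app: busybox2
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
# Any valid manifests elsewhere in the tree are still applied; only this file
# produces the "Object 'Kind' is missing" decode error.
kubectl apply -f /tmp/recursive-demo --recursive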
I0814 11:17:26.470] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:26.624] replicationcontroller/busybox0 created
I0814 11:17:26.630] replicationcontroller/busybox1 created
I0814 11:17:26.735] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:26.832] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:26.930] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 11:17:27.027] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 11:17:27.217] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 11:17:27.309] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 11:17:27.312] Successful
I0814 11:17:27.313] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0814 11:17:27.313] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0814 11:17:27.313] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:27.313] has:Object 'Kind' is missing
I0814 11:17:27.397] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0814 11:17:27.481] horizontalpodautoscaler.autoscaling "busybox1" deleted
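The HPA assertions at generic-resources.sh:344-345 correspond to an autoscale pass over the same recursive rc directory. A hedged sketch of that step (standard kubectl flags; the script's exact invocation may differ):

# Autoscale every decodable replication controller in the tree; the broken
# manifest again fails with "Object 'Kind' is missing".
kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80
# Verify the resulting HPA spec, matching the "1 2 80" assertion above.
kubectl get hpa busybox0 -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'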
I0814 11:17:27.581] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:27.673] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 11:17:27.770] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 11:17:27.969] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 11:17:28.054] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 11:17:28.056] Successful
I0814 11:17:28.057] message:service/busybox0 exposed
I0814 11:17:28.057] service/busybox1 exposed
I0814 11:17:28.058] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:28.058] has:Object 'Kind' is missing
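The service checks at generic-resources.sh:359-360 follow an expose pass over the same directory; no port name is set on the Service, hence the <no value> in the assertion. A hedged sketch (assumed flags, mirroring the logged results):

# Create a Service for each decodable RC; port 80, no explicit port name.
kubectl expose -f hack/testdata/recursive/rc --recursive --port=80
kubectl get service busybox0 -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'   # <no value> 80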
I0814 11:17:28.159] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:28.243] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 11:17:28.326] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 11:17:28.517] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0814 11:17:28.605] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0814 11:17:28.607] Successful
I0814 11:17:28.608] message:replicationcontroller/busybox0 scaled
I0814 11:17:28.608] replicationcontroller/busybox1 scaled
I0814 11:17:28.608] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:28.608] has:Object 'Kind' is missing
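The replica checks at generic-resources.sh:372-373 show both controllers going from 1 to 2 replicas in a single recursive scale call. A hedged sketch (assumed flags):

# Scale every decodable RC in the tree to 2 replicas; the broken manifest
# still fails to decode and is reported alongside the successful scales.
kubectl scale --replicas=2 -f hack/testdata/recursive/rc --recursive
kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'   # 2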
I0814 11:17:28.699] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:28.879] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:28.881] Successful
I0814 11:17:28.881] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 11:17:28.881] replicationcontroller "busybox0" force deleted
I0814 11:17:28.881] replicationcontroller "busybox1" force deleted
I0814 11:17:28.882] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:28.882] has:Object 'Kind' is missing
I0814 11:17:28.970] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:29.116] deployment.apps/nginx1-deployment created
I0814 11:17:29.120] deployment.apps/nginx0-deployment created
I0814 11:17:29.223] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0814 11:17:29.316] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 11:17:29.521] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 11:17:29.524] Successful
I0814 11:17:29.524] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0814 11:17:29.524] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0814 11:17:29.525] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:17:29.525] has:Object 'Kind' is missing
I0814 11:17:29.620] deployment.apps/nginx1-deployment paused
I0814 11:17:29.626] deployment.apps/nginx0-deployment paused
I0814 11:17:29.730] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0814 11:17:29.732] Successful
I0814 11:17:29.733] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:17:29.733] has:Object 'Kind' is missing
I0814 11:17:29.826] deployment.apps/nginx1-deployment resumed
I0814 11:17:29.836] deployment.apps/nginx0-deployment resumed
I0814 11:17:29.940] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0814 11:17:29.942] Successful
I0814 11:17:29.942] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:17:29.942] has:Object 'Kind' is missing
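The paused-field assertions at generic-resources.sh:404 and 410 bracket a rollout pause/resume cycle over the recursive deployment directory; note that resume removes .spec.paused entirely, which is why the template prints <no value> rather than false. A hedged sketch (assumed flags):

kubectl rollout pause -f hack/testdata/recursive/deployment --recursive
kubectl get deployment -o go-template='{{range.items}}{{.spec.paused}}:{{end}}'    # true:true:
kubectl rollout resume -f hack/testdata/recursive/deployment --recursive
kubectl get deployment -o go-template='{{range.items}}{{.spec.paused}}:{{end}}'    # <no value>:<no value>: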
W0814 11:17:30.043] E0814 11:17:25.282639   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.043] E0814 11:17:25.387091   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.044] E0814 11:17:25.487797   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.044] E0814 11:17:25.595236   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.044] E0814 11:17:26.284059   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.044] E0814 11:17:26.388782   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.045] E0814 11:17:26.489457   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.045] E0814 11:17:26.596906   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.045] I0814 11:17:26.629101   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781442-29710", Name:"busybox0", UID:"e169e14c-3b0b-40f7-b0ed-5f34b7cf671e", APIVersion:"v1", ResourceVersion:"975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qpl89
W0814 11:17:30.045] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 11:17:30.046] I0814 11:17:26.634537   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781442-29710", Name:"busybox1", UID:"313919f8-67c4-4570-b970-1740f87b2ea6", APIVersion:"v1", ResourceVersion:"977", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-lhfpg
W0814 11:17:30.046] E0814 11:17:27.285811   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.046] E0814 11:17:27.390238   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.047] E0814 11:17:27.490968   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.047] E0814 11:17:27.598316   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.047] E0814 11:17:28.287347   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.048] E0814 11:17:28.391514   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.048] I0814 11:17:28.416366   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781442-29710", Name:"busybox0", UID:"e169e14c-3b0b-40f7-b0ed-5f34b7cf671e", APIVersion:"v1", ResourceVersion:"997", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-264vl
W0814 11:17:30.048] I0814 11:17:28.428271   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781442-29710", Name:"busybox1", UID:"313919f8-67c4-4570-b970-1740f87b2ea6", APIVersion:"v1", ResourceVersion:"1001", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xhlp8
W0814 11:17:30.049] E0814 11:17:28.492536   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.049] E0814 11:17:28.599532   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.049] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 11:17:30.050] I0814 11:17:29.121328   53109 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781442-29710", Name:"nginx1-deployment", UID:"c1a3d147-1fe8-49fe-a535-0996a6db624a", APIVersion:"apps/v1", ResourceVersion:"1018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0814 11:17:30.050] I0814 11:17:29.125461   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781442-29710", Name:"nginx1-deployment-84f7f49fb7", UID:"997d88db-1a61-49ad-b868-93ad60a0d6d0", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-mpt8k
W0814 11:17:30.051] I0814 11:17:29.128053   53109 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565781442-29710", Name:"nginx0-deployment", UID:"4d00790a-a745-4fb6-9282-61f461bcd7c3", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0814 11:17:30.051] I0814 11:17:29.128646   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781442-29710", Name:"nginx1-deployment-84f7f49fb7", UID:"997d88db-1a61-49ad-b868-93ad60a0d6d0", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-zxs6m
W0814 11:17:30.052] I0814 11:17:29.133365   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781442-29710", Name:"nginx0-deployment-57475bf54d", UID:"da1ff144-8802-4f98-b927-7780c6bde744", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-6rbk5
W0814 11:17:30.052] I0814 11:17:29.136970   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565781442-29710", Name:"nginx0-deployment-57475bf54d", UID:"da1ff144-8802-4f98-b927-7780c6bde744", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-mpqjh
W0814 11:17:30.052] E0814 11:17:29.288925   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.052] E0814 11:17:29.393151   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.053] E0814 11:17:29.494547   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.053] E0814 11:17:29.601074   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.120] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 11:17:30.137] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:17:30.238] Successful
I0814 11:17:30.238] message:deployment.apps/nginx1-deployment 
I0814 11:17:30.238] REVISION  CHANGE-CAUSE
I0814 11:17:30.238] 1         <none>
I0814 11:17:30.238] 
I0814 11:17:30.238] deployment.apps/nginx0-deployment 
I0814 11:17:30.239] REVISION  CHANGE-CAUSE
I0814 11:17:30.239] 1         <none>
I0814 11:17:30.239] 
I0814 11:17:30.239] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:17:30.239] has:nginx0-deployment
I0814 11:17:30.239] Successful
I0814 11:17:30.240] message:deployment.apps/nginx1-deployment 
I0814 11:17:30.240] REVISION  CHANGE-CAUSE
I0814 11:17:30.240] 1         <none>
I0814 11:17:30.240] 
I0814 11:17:30.240] deployment.apps/nginx0-deployment 
I0814 11:17:30.240] REVISION  CHANGE-CAUSE
I0814 11:17:30.240] 1         <none>
I0814 11:17:30.240] 
I0814 11:17:30.240] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:17:30.241] has:nginx1-deployment
I0814 11:17:30.241] Successful
I0814 11:17:30.241] message:deployment.apps/nginx1-deployment 
I0814 11:17:30.241] REVISION  CHANGE-CAUSE
I0814 11:17:30.241] 1         <none>
I0814 11:17:30.241] 
I0814 11:17:30.241] deployment.apps/nginx0-deployment 
I0814 11:17:30.241] REVISION  CHANGE-CAUSE
I0814 11:17:30.241] 1         <none>
I0814 11:17:30.241] 
I0814 11:17:30.242] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 11:17:30.242] has:Object 'Kind' is missing
I0814 11:17:30.242] deployment.apps "nginx1-deployment" force deleted
I0814 11:17:30.242] deployment.apps "nginx0-deployment" force deleted
W0814 11:17:30.343] E0814 11:17:30.290799   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.395] E0814 11:17:30.394880   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.496] E0814 11:17:30.496169   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:30.604] E0814 11:17:30.603314   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:17:31.238] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:31.385] replicationcontroller/busybox0 created
I0814 11:17:31.389] replicationcontroller/busybox1 created
I0814 11:17:31.489] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 11:17:31.591] Successful
I0814 11:17:31.591] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0814 11:17:31.595] message:no rollbacker has been implemented for "ReplicationController"
I0814 11:17:31.595] no rollbacker has been implemented for "ReplicationController"
I0814 11:17:31.596] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:31.596] has:Object 'Kind' is missing
I0814 11:17:31.687] Successful
I0814 11:17:31.688] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:31.688] error: replicationcontrollers "busybox0" pausing is not supported
I0814 11:17:31.688] error: replicationcontrollers "busybox1" pausing is not supported
I0814 11:17:31.688] has:Object 'Kind' is missing
I0814 11:17:31.689] Successful
I0814 11:17:31.690] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:31.690] error: replicationcontrollers "busybox0" pausing is not supported
I0814 11:17:31.690] error: replicationcontrollers "busybox1" pausing is not supported
I0814 11:17:31.690] has:replicationcontrollers "busybox0" pausing is not supported
I0814 11:17:31.691] Successful
I0814 11:17:31.692] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:31.692] error: replicationcontrollers "busybox0" pausing is not supported
I0814 11:17:31.693] error: replicationcontrollers "busybox1" pausing is not supported
I0814 11:17:31.693] has:replicationcontrollers "busybox1" pausing is not supported
I0814 11:17:31.785] Successful
I0814 11:17:31.786] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:31.786] error: replicationcontrollers "busybox0" resuming is not supported
I0814 11:17:31.787] error: replicationcontrollers "busybox1" resuming is not supported
I0814 11:17:31.787] has:Object 'Kind' is missing
I0814 11:17:31.787] Successful
I0814 11:17:31.788] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:31.788] error: replicationcontrollers "busybox0" resuming is not supported
I0814 11:17:31.789] error: replicationcontrollers "busybox1" resuming is not supported
I0814 11:17:31.789] has:replicationcontrollers "busybox0" resuming is not supported
I0814 11:17:31.789] Successful
I0814 11:17:31.790] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 11:17:31.790] error: replicationcontrollers "busybox0" resuming is not supported
I0814 11:17:31.790] error: replicationcontrollers "busybox1" resuming is not supported
I0814 11:17:31.791] has:replicationcontrollers "busybox0" resuming is not supported
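These checks confirm that rollout pause/resume, like the rollback attempted above, is simply not implemented for ReplicationControllers: each RC in the tree is rejected individually while the broken manifest still fails to decode. A hedged sketch (assumed flags):

kubectl rollout pause -f hack/testdata/recursive/rc --recursive    # error: replicationcontrollers "busybox0" pausing is not supported
kubectl rollout resume -f hack/testdata/recursive/rc --recursive   # error: replicationcontrollers "busybox0" resuming is not supported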
I0814 11:17:31.864] replicationcontroller "busybox0" force deleted
I0814 11:17:31.868] replicationcontroller "busybox1" force deleted
W0814 11:17:31.969] E0814 11:17:31.292658   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:31.970] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 11:17:31.971] I0814 11:17:31.389975   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781442-29710", Name:"busybox0", UID:"edadd2a2-fbee-4705-88ba-49510f0be4c6", APIVersion:"v1", ResourceVersion:"1067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7lp7x
W0814 11:17:31.971] I0814 11:17:31.394364   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565781442-29710", Name:"busybox1", UID:"7955c60e-7e35-409a-967d-0d85ba5141d3", APIVersion:"v1", ResourceVersion:"1069", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-kntkb
W0814 11:17:31.971] E0814 11:17:31.397506   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:31.972] E0814 11:17:31.497728   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:31.972] E0814 11:17:31.605005   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:31.973] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 11:17:31.973] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0814 11:17:32.295] E0814 11:17:32.294372   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:32.400] E0814 11:17:32.399322   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:32.500] E0814 11:17:32.499446   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:32.608] E0814 11:17:32.607184   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:17:32.874] Recording: run_namespace_tests
I0814 11:17:32.875] Running command: run_namespace_tests
I0814 11:17:32.897] 
I0814 11:17:32.899] +++ Running case: test-cmd.run_namespace_tests 
I0814 11:17:32.901] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:17:32.904] +++ command: run_namespace_tests
I0814 11:17:32.914] +++ [0814 11:17:32] Testing kubectl(v1:namespaces)
I0814 11:17:32.986] namespace/my-namespace created
I0814 11:17:33.086] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 11:17:33.167] (Bnamespace "my-namespace" deleted
W0814 11:17:33.296] E0814 11:17:33.295970   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:33.401] E0814 11:17:33.400875   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:33.502] E0814 11:17:33.501466   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:33.609] E0814 11:17:33.609073   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:34.298] E0814 11:17:34.297783   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:34.403] E0814 11:17:34.402603   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:34.503] E0814 11:17:34.503275   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:34.611] E0814 11:17:34.610566   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:35.300] E0814 11:17:35.299323   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:35.405] E0814 11:17:35.404431   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:35.505] E0814 11:17:35.504795   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:35.613] E0814 11:17:35.612298   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:36.301] E0814 11:17:36.300946   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:36.406] E0814 11:17:36.406030   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:36.507] E0814 11:17:36.506405   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:36.615] E0814 11:17:36.614323   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:37.303] E0814 11:17:37.302522   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:37.408] E0814 11:17:37.407554   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:37.508] E0814 11:17:37.508065   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:37.616] E0814 11:17:37.616067   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:17:38.273] namespace/my-namespace condition met
I0814 11:17:38.369] Successful
I0814 11:17:38.370] message:Error from server (NotFound): namespaces "my-namespace" not found
I0814 11:17:38.370] has: not found
I0814 11:17:38.449] namespace/my-namespace created
I0814 11:17:38.550] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 11:17:38.780] Successful
I0814 11:17:38.781] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 11:17:38.781] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0814 11:17:38.784] namespace "namespace-1565781402-15196" deleted
I0814 11:17:38.784] namespace "namespace-1565781403-32677" deleted
I0814 11:17:38.785] namespace "namespace-1565781405-7168" deleted
I0814 11:17:38.785] namespace "namespace-1565781406-29962" deleted
I0814 11:17:38.785] namespace "namespace-1565781441-15717" deleted
I0814 11:17:38.785] namespace "namespace-1565781442-29710" deleted
I0814 11:17:38.785] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 11:17:38.785] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 11:17:38.785] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 11:17:38.785] has:warning: deleting cluster-scoped resources
I0814 11:17:38.785] Successful
I0814 11:17:38.786] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 11:17:38.786] namespace "kube-node-lease" deleted
I0814 11:17:38.786] namespace "my-namespace" deleted
I0814 11:17:38.786] namespace "namespace-1565781308-20241" deleted
... skipping 27 lines ...
I0814 11:17:38.788] namespace "namespace-1565781402-15196" deleted
I0814 11:17:38.789] namespace "namespace-1565781403-32677" deleted
I0814 11:17:38.789] namespace "namespace-1565781405-7168" deleted
I0814 11:17:38.789] namespace "namespace-1565781406-29962" deleted
I0814 11:17:38.789] namespace "namespace-1565781441-15717" deleted
I0814 11:17:38.789] namespace "namespace-1565781442-29710" deleted
I0814 11:17:38.789] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 11:17:38.789] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 11:17:38.789] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 11:17:38.789] has:namespace "my-namespace" deleted
I0814 11:17:38.895] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0814 11:17:38.968] namespace/other created
I0814 11:17:39.057] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0814 11:17:39.143] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:39.311] pod/valid-pod created
I0814 11:17:39.413] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:17:39.507] (Bcore.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:17:39.589] Successful
I0814 11:17:39.590] message:error: a resource cannot be retrieved by name across all namespaces
I0814 11:17:39.590] has:a resource cannot be retrieved by name across all namespaces
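That assertion exercises a kubectl restriction rather than a server error: a resource name cannot be combined with --all-namespaces, since a name is only unique within one namespace. A short sketch of both forms:

kubectl get pods valid-pod --all-namespaces     # error: a resource cannot be retrieved by name across all namespaces
kubectl get pods valid-pod --namespace=other    # works: the name is resolved inside a single namespace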
I0814 11:17:39.680] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 11:17:39.755] (Bpod "valid-pod" force deleted
I0814 11:17:39.844] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:39.917] (Bnamespace "other" deleted
W0814 11:17:40.017] E0814 11:17:38.304270   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.018] E0814 11:17:38.409348   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.018] E0814 11:17:38.509567   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.019] E0814 11:17:38.617816   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.019] E0814 11:17:39.305606   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.019] E0814 11:17:39.410428   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.019] E0814 11:17:39.510926   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.019] I0814 11:17:39.555780   53109 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 11:17:40.019] I0814 11:17:39.613917   53109 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 11:17:40.020] E0814 11:17:39.619089   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.020] I0814 11:17:39.656223   53109 controller_utils.go:1036] Caches are synced for garbage collector controller
W0814 11:17:40.020] I0814 11:17:39.714288   53109 controller_utils.go:1036] Caches are synced for resource quota controller
W0814 11:17:40.020] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 11:17:40.308] E0814 11:17:40.307328   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.412] E0814 11:17:40.411916   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.513] E0814 11:17:40.512793   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:40.621] E0814 11:17:40.620744   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:41.309] E0814 11:17:41.308598   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:41.414] E0814 11:17:41.413441   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:41.515] E0814 11:17:41.514463   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:41.623] E0814 11:17:41.622393   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:42.110] I0814 11:17:42.109228   53109 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565781442-29710
W0814 11:17:42.116] I0814 11:17:42.115446   53109 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565781442-29710
W0814 11:17:42.310] E0814 11:17:42.310056   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:42.415] E0814 11:17:42.415062   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:42.517] E0814 11:17:42.516282   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:42.624] E0814 11:17:42.624070   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:43.312] E0814 11:17:43.311508   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:43.417] E0814 11:17:43.416771   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:43.518] E0814 11:17:43.517680   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:43.626] E0814 11:17:43.625705   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:44.313] E0814 11:17:44.312959   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:44.418] E0814 11:17:44.417731   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:44.520] E0814 11:17:44.519304   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:44.628] E0814 11:17:44.627573   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:17:45.041] +++ exit code: 0
I0814 11:17:45.078] Recording: run_secrets_test
I0814 11:17:45.078] Running command: run_secrets_test
I0814 11:17:45.101] 
I0814 11:17:45.104] +++ Running case: test-cmd.run_secrets_test 
I0814 11:17:45.106] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0814 11:17:47.039] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 11:17:47.120] (Bsecret "test-secret" deleted
I0814 11:17:47.203] secret/test-secret created
I0814 11:17:47.297] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 11:17:47.386] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 11:17:47.464] secret "test-secret" deleted
W0814 11:17:47.565] E0814 11:17:45.314491   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.566] I0814 11:17:45.345712   70103 loader.go:375] Config loaded from file:  /tmp/tmp.2yLRPQmPLp/.kube/config
W0814 11:17:47.566] E0814 11:17:45.419342   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.566] E0814 11:17:45.520785   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.567] E0814 11:17:45.629070   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.567] E0814 11:17:46.315984   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.567] E0814 11:17:46.420969   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.567] E0814 11:17:46.522476   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.567] E0814 11:17:46.630622   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.568] E0814 11:17:47.317220   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.568] E0814 11:17:47.422520   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.568] E0814 11:17:47.523829   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:47.632] E0814 11:17:47.632246   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:17:47.733] secret/secret-string-data created
I0814 11:17:47.736] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 11:17:47.825] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 11:17:47.915] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0814 11:17:47.993] secret "secret-string-data" deleted
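The secret-string-data checks at core.sh:796-798 verify how the API server stores string input: whatever arrives via stringData is persisted base64-encoded under .data (djE= and djI= are "v1" and "v2"), and .stringData itself is never stored. A hedged sketch of an equivalent check (the test's actual manifest may differ):

kubectl create --namespace=test-secrets -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-string-data
stringData:
  k1: v1
  k2: v2
EOF
kubectl get secret secret-string-data --namespace=test-secrets -o go-template='{{.data}}'         # map[k1:djE= k2:djI=]
kubectl get secret secret-string-data --namespace=test-secrets -o go-template='{{.stringData}}'   # <no value>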
I0814 11:17:48.091] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:17:48.265] (Bsecret "test-secret" deleted
I0814 11:17:48.353] namespace "test-secrets" deleted
W0814 11:17:48.454] E0814 11:17:48.318804   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:48.455] E0814 11:17:48.424247   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:48.526] E0814 11:17:48.525509   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:48.634] E0814 11:17:48.633924   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:49.320] E0814 11:17:49.320283   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:49.426] E0814 11:17:49.425688   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:49.527] E0814 11:17:49.527073   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:49.636] E0814 11:17:49.635527   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:50.322] E0814 11:17:50.321826   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:50.435] E0814 11:17:50.434238   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:50.529] E0814 11:17:50.528981   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:50.638] E0814 11:17:50.637304   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:51.324] E0814 11:17:51.323611   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:51.437] E0814 11:17:51.437081   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:51.531] E0814 11:17:51.530580   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:51.639] E0814 11:17:51.639147   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:52.325] E0814 11:17:52.325011   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:52.439] E0814 11:17:52.438522   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:52.533] E0814 11:17:52.532284   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:52.641] E0814 11:17:52.641030   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:53.327] E0814 11:17:53.326544   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:53.440] E0814 11:17:53.439933   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:53.534] E0814 11:17:53.533708   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:17:53.635] +++ exit code: 0
I0814 11:17:53.635] Recording: run_configmap_tests
I0814 11:17:53.635] Running command: run_configmap_tests
I0814 11:17:53.635] 
I0814 11:17:53.635] +++ Running case: test-cmd.run_configmap_tests 
I0814 11:17:53.635] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:17:53.636] +++ command: run_configmap_tests
I0814 11:17:53.636] +++ [0814 11:17:53] Creating namespace namespace-1565781473-9827
I0814 11:17:53.636] namespace/namespace-1565781473-9827 created
I0814 11:17:53.682] Context "test" modified.
I0814 11:17:53.689] +++ [0814 11:17:53] Testing configmaps
W0814 11:17:53.789] E0814 11:17:53.642392   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:17:53.890] configmap/test-configmap created
I0814 11:17:53.979] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0814 11:17:54.059] configmap "test-configmap" deleted
I0814 11:17:54.155] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0814 11:17:54.231] namespace/test-configmaps created
I0814 11:17:54.322] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0814 11:17:54.644] configmap/test-binary-configmap created
I0814 11:17:54.737] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0814 11:17:54.827] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0814 11:17:55.076] configmap "test-configmap" deleted
I0814 11:17:55.160] configmap "test-binary-configmap" deleted
I0814 11:17:55.245] namespace "test-configmaps" deleted
W0814 11:17:55.345] E0814 11:17:54.328058   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:55.346] E0814 11:17:54.441444   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:55.347] E0814 11:17:54.535319   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:55.347] E0814 11:17:54.643979   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:55.347] E0814 11:17:55.329513   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:55.444] E0814 11:17:55.443538   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:55.537] E0814 11:17:55.537007   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:55.646] E0814 11:17:55.645697   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:56.331] E0814 11:17:56.331237   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:56.445] E0814 11:17:56.444985   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:56.538] E0814 11:17:56.538108   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:56.647] E0814 11:17:56.647218   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:57.333] E0814 11:17:57.332964   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:57.447] E0814 11:17:57.446694   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:57.540] E0814 11:17:57.539582   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:57.649] E0814 11:17:57.648769   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:58.335] E0814 11:17:58.334558   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:58.449] E0814 11:17:58.448385   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:58.541] E0814 11:17:58.541228   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:58.651] E0814 11:17:58.650318   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:59.336] E0814 11:17:59.336125   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:59.450] E0814 11:17:59.449916   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:59.543] E0814 11:17:59.542986   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:17:59.652] E0814 11:17:59.651797   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:00.338] E0814 11:18:00.337581   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:00.439] +++ exit code: 0
I0814 11:18:00.439] Recording: run_client_config_tests
I0814 11:18:00.439] Running command: run_client_config_tests
I0814 11:18:00.439] 
I0814 11:18:00.439] +++ Running case: test-cmd.run_client_config_tests 
I0814 11:18:00.439] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:18:00.439] +++ command: run_client_config_tests
I0814 11:18:00.447] +++ [0814 11:18:00] Creating namespace namespace-1565781480-13178
I0814 11:18:00.522] namespace/namespace-1565781480-13178 created
I0814 11:18:00.595] Context "test" modified.
I0814 11:18:00.602] +++ [0814 11:18:00] Testing client config
I0814 11:18:00.681] Successful
I0814 11:18:00.681] message:error: stat missing: no such file or directory
I0814 11:18:00.682] has:missing: no such file or directory
I0814 11:18:00.752] Successful
I0814 11:18:00.752] message:error: stat missing: no such file or directory
I0814 11:18:00.752] has:missing: no such file or directory
I0814 11:18:00.825] Successful
I0814 11:18:00.826] message:error: stat missing: no such file or directory
I0814 11:18:00.826] has:missing: no such file or directory
I0814 11:18:00.904] Successful
I0814 11:18:00.905] message:Error in configuration: context was not found for specified context: missing-context
I0814 11:18:00.905] has:context was not found for specified context: missing-context
I0814 11:18:00.981] Successful
I0814 11:18:00.981] message:error: no server found for cluster "missing-cluster"
I0814 11:18:00.981] has:no server found for cluster "missing-cluster"
I0814 11:18:01.057] Successful
I0814 11:18:01.058] message:error: auth info "missing-user" does not exist
I0814 11:18:01.058] has:auth info "missing-user" does not exist
W0814 11:18:01.159] E0814 11:18:00.451420   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:01.160] E0814 11:18:00.544679   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:01.160] E0814 11:18:00.653518   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:01.261] Successful
I0814 11:18:01.261] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0814 11:18:01.262] has:error loading config file
I0814 11:18:01.296] Successful
I0814 11:18:01.296] message:error: stat missing-config: no such file or directory
I0814 11:18:01.297] has:no such file or directory
I0814 11:18:01.311] +++ exit code: 0
I0814 11:18:01.354] Recording: run_service_accounts_tests
I0814 11:18:01.354] Running command: run_service_accounts_tests
I0814 11:18:01.379] 
I0814 11:18:01.382] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0814 11:18:01.736] namespace/test-service-accounts created
I0814 11:18:01.836] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0814 11:18:01.916] serviceaccount/test-service-account created
I0814 11:18:02.020] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0814 11:18:02.100] serviceaccount "test-service-account" deleted
I0814 11:18:02.188] namespace "test-service-accounts" deleted
W0814 11:18:02.289] E0814 11:18:01.339332   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:02.289] E0814 11:18:01.453480   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:02.290] E0814 11:18:01.546399   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:02.290] E0814 11:18:01.655056   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:02.341] E0814 11:18:02.340982   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:02.455] E0814 11:18:02.454951   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:02.549] E0814 11:18:02.548295   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:02.657] E0814 11:18:02.656659   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:03.343] E0814 11:18:03.342471   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:03.457] E0814 11:18:03.456605   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:03.551] E0814 11:18:03.550426   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:03.659] E0814 11:18:03.658351   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:04.344] E0814 11:18:04.344076   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:04.458] E0814 11:18:04.458269   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:04.553] E0814 11:18:04.552263   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:04.661] E0814 11:18:04.660136   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:05.346] E0814 11:18:05.345773   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:05.461] E0814 11:18:05.460434   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:05.554] E0814 11:18:05.554068   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:05.663] E0814 11:18:05.662173   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:06.348] E0814 11:18:06.347736   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:06.464] E0814 11:18:06.463366   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:06.556] E0814 11:18:06.555443   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:06.664] E0814 11:18:06.663877   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:07.308] +++ exit code: 0
I0814 11:18:07.344] Recording: run_job_tests
I0814 11:18:07.345] Running command: run_job_tests
I0814 11:18:07.367] 
I0814 11:18:07.369] +++ Running case: test-cmd.run_job_tests 
I0814 11:18:07.372] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0814 11:18:08.168] Labels:                        run=pi
I0814 11:18:08.168] Annotations:                   <none>
I0814 11:18:08.168] Schedule:                      59 23 31 2 *
I0814 11:18:08.169] Concurrency Policy:            Allow
I0814 11:18:08.169] Suspend:                       False
I0814 11:18:08.169] Successful Job History Limit:  3
I0814 11:18:08.169] Failed Job History Limit:      1
I0814 11:18:08.170] Starting Deadline Seconds:     <unset>
I0814 11:18:08.170] Selector:                      <unset>
I0814 11:18:08.170] Parallelism:                   <unset>
I0814 11:18:08.171] Completions:                   <unset>
I0814 11:18:08.171] Pod Template:
I0814 11:18:08.171]   Labels:  run=pi
... skipping 32 lines ...
I0814 11:18:08.723]                 run=pi
I0814 11:18:08.723] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0814 11:18:08.723] Controlled By:  CronJob/pi
I0814 11:18:08.723] Parallelism:    1
I0814 11:18:08.723] Completions:    1
I0814 11:18:08.723] Start Time:     Wed, 14 Aug 2019 11:18:08 +0000
I0814 11:18:08.723] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0814 11:18:08.723] Pod Template:
I0814 11:18:08.723]   Labels:  controller-uid=b20233e5-73bd-4b8a-b284-0f91e30131d9
I0814 11:18:08.723]            job-name=test-job
I0814 11:18:08.723]            run=pi
I0814 11:18:08.724]   Containers:
I0814 11:18:08.724]    pi:
... skipping 15 lines ...
I0814 11:18:08.725]   Type    Reason            Age   From            Message
I0814 11:18:08.725]   ----    ------            ----  ----            -------
I0814 11:18:08.725]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-llrxz
I0814 11:18:08.807] job.batch "test-job" deleted
I0814 11:18:08.895] cronjob.batch "pi" deleted
I0814 11:18:08.981] namespace "test-jobs" deleted
W0814 11:18:09.082] E0814 11:18:07.349698   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.083] E0814 11:18:07.465410   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.083] E0814 11:18:07.556800   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.083] E0814 11:18:07.665101   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.084] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 11:18:09.084] E0814 11:18:08.351097   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.084] I0814 11:18:08.444430   53109 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"b20233e5-73bd-4b8a-b284-0f91e30131d9", APIVersion:"batch/v1", ResourceVersion:"1348", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-llrxz
W0814 11:18:09.085] E0814 11:18:08.467064   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.085] E0814 11:18:08.558826   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.085] E0814 11:18:08.666858   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.353] E0814 11:18:09.352777   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.469] E0814 11:18:09.469139   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.561] E0814 11:18:09.560450   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:09.668] E0814 11:18:09.668269   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:10.355] E0814 11:18:10.354599   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:10.471] E0814 11:18:10.470799   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:10.563] E0814 11:18:10.562358   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:10.671] E0814 11:18:10.670325   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:11.356] E0814 11:18:11.356041   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:11.473] E0814 11:18:11.472364   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:11.564] E0814 11:18:11.563917   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:11.672] E0814 11:18:11.671906   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:12.358] E0814 11:18:12.357725   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:12.474] E0814 11:18:12.473708   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:12.566] E0814 11:18:12.565598   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:12.674] E0814 11:18:12.673767   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:13.361] E0814 11:18:13.360007   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:13.476] E0814 11:18:13.475696   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:13.568] E0814 11:18:13.567893   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:13.677] E0814 11:18:13.676176   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:14.148] +++ exit code: 0
I0814 11:18:14.188] Recording: run_create_job_tests
I0814 11:18:14.189] Running command: run_create_job_tests
I0814 11:18:14.213] 
I0814 11:18:14.216] +++ Running case: test-cmd.run_create_job_tests 
I0814 11:18:14.219] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:18:14.222] +++ command: run_create_job_tests
I0814 11:18:14.238] +++ [0814 11:18:14] Creating namespace namespace-1565781494-31760
I0814 11:18:14.324] namespace/namespace-1565781494-31760 created
I0814 11:18:14.409] Context "test" modified.
I0814 11:18:14.507] job.batch/test-job created
W0814 11:18:14.609] E0814 11:18:14.361662   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:14.610] E0814 11:18:14.477608   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:14.610] I0814 11:18:14.507439   53109 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565781494-31760", Name:"test-job", UID:"57ddcd16-b34a-465f-8feb-e183e269427b", APIVersion:"batch/v1", ResourceVersion:"1365", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-j6kn9
W0814 11:18:14.611] E0814 11:18:14.569975   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:14.678] E0814 11:18:14.677719   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:14.779] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I0814 11:18:14.780] job.batch "test-job" deleted
I0814 11:18:14.817] job.batch/test-job-pi created
I0814 11:18:14.919] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0814 11:18:15.006] job.batch "test-job-pi" deleted
W0814 11:18:15.108] I0814 11:18:14.811343   53109 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565781494-31760", Name:"test-job-pi", UID:"e46f2790-6652-4500-acaf-ae05b2568fc6", APIVersion:"batch/v1", ResourceVersion:"1372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-m9845
... skipping 16 lines ...
I0814 11:18:15.701] namespace/namespace-1565781495-21279 created
I0814 11:18:15.796] Context "test" modified.
I0814 11:18:15.803] +++ [0814 11:18:15] Testing pod templates
I0814 11:18:15.906] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:18:16.076] podtemplate/nginx created
W0814 11:18:16.177] I0814 11:18:15.225041   53109 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565781494-31760", Name:"my-pi", UID:"c26a2e74-7d08-480b-ab9c-d8f87816ece2", APIVersion:"batch/v1", ResourceVersion:"1381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-xtvfc
W0814 11:18:16.177] E0814 11:18:15.363428   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:16.177] E0814 11:18:15.479306   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:16.177] E0814 11:18:15.571543   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:16.178] E0814 11:18:15.679238   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:16.178] I0814 11:18:16.070305   49637 controller.go:606] quota admission added evaluator for: podtemplates
I0814 11:18:16.278] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 11:18:16.279] NAME    CONTAINERS   IMAGES   POD LABELS
I0814 11:18:16.279] nginx   nginx        nginx    name=nginx
W0814 11:18:16.379] E0814 11:18:16.365152   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:16.480] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 11:18:16.566] podtemplate "nginx" deleted
I0814 11:18:16.670] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:18:16.684] +++ exit code: 0
I0814 11:18:16.726] Recording: run_service_tests
I0814 11:18:16.726] Running command: run_service_tests
... skipping 2 lines ...
I0814 11:18:16.755] +++ working dir: /go/src/k8s.io/kubernetes
I0814 11:18:16.757] +++ command: run_service_tests
I0814 11:18:16.839] Context "test" modified.
I0814 11:18:16.847] +++ [0814 11:18:16] Testing kubectl(v1:services)
I0814 11:18:16.952] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:18:17.124] service/redis-master created
W0814 11:18:17.225] E0814 11:18:16.481479   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:17.225] E0814 11:18:16.572808   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:17.225] E0814 11:18:16.681135   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:17.326] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 11:18:17.364] core.sh:864: Successful describe services redis-master:
I0814 11:18:17.364] Name:              redis-master
I0814 11:18:17.365] Namespace:         default
I0814 11:18:17.365] Labels:            app=redis
I0814 11:18:17.365]                    role=master
... skipping 51 lines ...
I0814 11:18:17.683] Port:              <unset>  6379/TCP
I0814 11:18:17.683] TargetPort:        6379/TCP
I0814 11:18:17.683] Endpoints:         <none>
I0814 11:18:17.684] Session Affinity:  None
I0814 11:18:17.684] Events:            <none>
I0814 11:18:17.684] 
W0814 11:18:17.784] E0814 11:18:17.366912   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:17.785] E0814 11:18:17.483167   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:17.785] E0814 11:18:17.574306   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:17.786] E0814 11:18:17.682998   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:17.886] Successful describe services:
I0814 11:18:17.887] Name:              kubernetes
I0814 11:18:17.887] Namespace:         default
I0814 11:18:17.887] Labels:            component=apiserver
I0814 11:18:17.887]                    provider=kubernetes
I0814 11:18:17.887] Annotations:       <none>
... skipping 177 lines ...
I0814 11:18:18.425]     role: padawan
I0814 11:18:18.425]   sessionAffinity: None
I0814 11:18:18.425]   type: ClusterIP
I0814 11:18:18.425] status:
I0814 11:18:18.425]   loadBalancer: {}
I0814 11:18:18.507] service/redis-master selector updated
W0814 11:18:18.608] E0814 11:18:18.368421   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:18.609] E0814 11:18:18.484923   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:18.609] E0814 11:18:18.576286   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:18.685] E0814 11:18:18.684835   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:18.786] core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
I0814 11:18:18.787] service/redis-master selector updated
I0814 11:18:18.822] core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 11:18:18.916] apiVersion: v1
I0814 11:18:18.917] kind: Service
I0814 11:18:18.917] metadata:
... skipping 49 lines ...
I0814 11:18:18.926]   selector:
I0814 11:18:18.926]     role: padawan
I0814 11:18:18.926]   sessionAffinity: None
I0814 11:18:18.927]   type: ClusterIP
I0814 11:18:18.927] status:
I0814 11:18:18.927]   loadBalancer: {}
W0814 11:18:19.028] error: you must specify resources by --filename when --local is set.
W0814 11:18:19.028] Example resource specifications include:
W0814 11:18:19.028]    '-f rsrc.yaml'
W0814 11:18:19.029]    '--filename=rsrc.json'
I0814 11:18:19.129] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 11:18:19.323] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 11:18:19.419] (Bservice "redis-master" deleted
W0814 11:18:19.520] E0814 11:18:19.370408   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:19.520] E0814 11:18:19.487068   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:19.578] E0814 11:18:19.577828   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:19.679] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:18:19.679] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:18:19.821] service/redis-master created
W0814 11:18:19.922] E0814 11:18:19.686608   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:20.023] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 11:18:20.036] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 11:18:20.216] service/service-v1-test created
I0814 11:18:20.328] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 11:18:20.517] service/service-v1-test replaced
W0814 11:18:20.618] E0814 11:18:20.372302   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:20.619] E0814 11:18:20.489147   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:20.619] E0814 11:18:20.579724   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:20.689] E0814 11:18:20.688481   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:20.790] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 11:18:20.790] service "redis-master" deleted
I0814 11:18:20.829] service "service-v1-test" deleted
I0814 11:18:20.943] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:18:21.048] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:18:21.225] service/redis-master created
W0814 11:18:21.375] E0814 11:18:21.374520   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:21.476] service/redis-slave created
I0814 11:18:21.515] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0814 11:18:21.618] Successful
I0814 11:18:21.619] message:NAME           RSRC
I0814 11:18:21.619] kubernetes     144
I0814 11:18:21.619] redis-master   1416
I0814 11:18:21.619] redis-slave    1419
I0814 11:18:21.619] has:redis-master
I0814 11:18:21.719] core.sh:979: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0814 11:18:21.812] (Bservice "redis-master" deleted
I0814 11:18:21.821] service "redis-slave" deleted
W0814 11:18:21.923] E0814 11:18:21.490631   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:21.924] E0814 11:18:21.581618   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:21.924] E0814 11:18:21.690333   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:22.024] core.sh:986: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:18:22.031] core.sh:990: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:18:22.125] service/beep-boop created
I0814 11:18:22.233] core.sh:994: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0814 11:18:22.335] core.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0814 11:18:22.426] service "beep-boop" deleted
W0814 11:18:22.527] E0814 11:18:22.376427   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:22.528] E0814 11:18:22.492438   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:22.584] E0814 11:18:22.584092   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:22.685] core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 11:18:22.686] core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:18:22.745] service/testmetadata created
I0814 11:18:22.746] deployment.apps/testmetadata created
W0814 11:18:22.847] E0814 11:18:22.692232   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:22.847] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 11:18:22.848] I0814 11:18:22.726247   53109 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"7bbfab1a-091e-40e7-b7a4-36c2ae70daca", APIVersion:"apps/v1", ResourceVersion:"1431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0814 11:18:22.848] I0814 11:18:22.731443   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"63dd2300-664c-4833-a08a-cd7e74131a60", APIVersion:"apps/v1", ResourceVersion:"1432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-sz22g
W0814 11:18:22.848] I0814 11:18:22.736076   53109 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"63dd2300-664c-4833-a08a-cd7e74131a60", APIVersion:"apps/v1", ResourceVersion:"1432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-gjtgj
I0814 11:18:22.949] core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I0814 11:18:22.977] core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
... skipping 37 lines ...
I0814 11:18:25.243] +++ [0814 11:18:25] Creating namespace namespace-1565781505-16539
I0814 11:18:25.320] namespace/namespace-1565781505-16539 created
I0814 11:18:25.395] Context "test" modified.
I0814 11:18:25.403] +++ [0814 11:18:25] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I0814 11:18:25.501] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:18:25.679] daemonset.apps/bind created
W0814 11:18:25.780] E0814 11:18:23.377545   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.781] E0814 11:18:23.494349   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.781] E0814 11:18:23.585758   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.781] E0814 11:18:23.693837   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.781] I0814 11:18:23.887651   49637 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0814 11:18:25.782] I0814 11:18:23.899812   49637 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0814 11:18:25.782] E0814 11:18:24.379160   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.782] E0814 11:18:24.495989   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.782] E0814 11:18:24.587337   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.782] E0814 11:18:24.695449   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.783] E0814 11:18:25.380983   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.783] E0814 11:18:25.497427   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.783] E0814 11:18:25.588962   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:25.783] E0814 11:18:25.696999   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:25.884] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1565781505-16539"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0814 11:18:25.885]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I0814 11:18:25.895] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I0814 11:18:25.997] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 11:18:26.093] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 11:18:26.267] daemonset.apps/bind configured
... skipping 15 lines ...
I0814 11:18:26.763]   Volumes:	<none>
I0814 11:18:26.763]  (dry run)
I0814 11:18:26.862] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0814 11:18:26.959] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 11:18:27.056] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0814 11:18:27.164] daemonset.apps/bind rolled back
W0814 11:18:27.265] E0814 11:18:26.382511   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:27.266] E0814 11:18:26.498829   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:27.266] E0814 11:18:26.591262   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:27.266] E0814 11:18:26.698630   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 11:18:27.367] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 11:18:27.377] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 11:18:27.481] Successful
I0814 11:18:27.481] message:error: unable to find specified revision 1000000 in history
I0814 11:18:27.481] has:unable to find specified revision
I0814 11:18:27.578] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 11:18:27.679] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 11:18:27.786] daemonset.apps/bind rolled back
I0814 11:18:27.891] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0814 11:18:27.992] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 13 lines ...
I0814 11:18:28.527] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:18:28.684] replicationcontroller/frontend created
I0814 11:18:28.779] replicationcontroller "frontend" deleted
I0814 11:18:28.885] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:18:28.981] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 11:18:29.145] replicationcontroller/frontend created
W0814 11:18:29.246] E0814 11:18:27.383995   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:29.246] E0814 11:18:27.500569   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:29.247] E0814 11:18:27.593138   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:29.247] E0814 11:18:27.700300   53109 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 11:18:29.252] E0814 11:18:27.809565   53109 daemon_controller.go:302] namespace-1565781505-16539/bind