Result: FAILURE
Tests: 1 failed / 2470 succeeded
Started: 2019-08-13 19:44
Elapsed: 27m21s
Revision:
Builder: gke-prow-ssd-pool-1a225945-d0kf
pod: ab510abe-be02-11e9-a0ae-ea43db2f3479
resultstore: https://source.cloud.google.com/results/invocations/e0f3a8aa-06f4-43e9-99b8-aada28d749bd/targets/test
infra-commit: ad41c697d
repo: k8s.io/kubernetes
repo-commit: f22b67dd8f9f4c1ce7434a6432e4f952ef36ea32
repos: {k8s.io/kubernetes: master}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
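
For local reproduction, a minimal sketch (assuming a checkout of k8s.io/kubernetes at the repo-commit above and a local etcd on 127.0.0.1:2379, which is what the log below connects to; the hack/install-etcd.sh helper and the test-integration make target are the standard contributor-guide entry points, not commands taken from this job):

    # Install the pinned etcd into third_party/ and put it on PATH (repo-provided helper).
    ./hack/install-etcd.sh
    export PATH="${PWD}/third_party/etcd:${PATH}"

    # Run only the failing scheduler integration test, mirroring the go test command shown above.
    make test-integration WHAT=./test/integration/scheduler \
        KUBE_TEST_ARGS="-run ^TestPreemptWithPermitPlugin$"
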
=== RUN   TestPreemptWithPermitPlugin
I0813 20:07:24.195368  110484 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0813 20:07:24.195401  110484 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0813 20:07:24.195413  110484 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0813 20:07:24.195424  110484 master.go:234] Using reconciler: 
I0813 20:07:24.197897  110484 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.198051  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.198109  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.198176  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.198262  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.198806  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.199006  110484 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0813 20:07:24.199040  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.199044  110484 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.199146  110484 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0813 20:07:24.199220  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.199228  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.199252  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.199351  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.199808  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.199946  110484 store.go:1342] Monitoring events count at <storage-prefix>//events
I0813 20:07:24.199982  110484 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.200043  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.200051  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.200075  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.200119  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.200157  110484 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0813 20:07:24.200298  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.200556  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.200654  110484 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0813 20:07:24.200684  110484 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.200746  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.200757  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.200789  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.200846  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.200876  110484 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0813 20:07:24.201013  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.201182  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.201240  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.201341  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.201341  110484 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0813 20:07:24.201423  110484 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0813 20:07:24.201508  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.201524  110484 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.201589  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.201617  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.201643  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.201688  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.201929  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.202042  110484 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0813 20:07:24.202195  110484 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.202278  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.202295  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.202329  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.202382  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.202421  110484 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0813 20:07:24.202587  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.203893  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.203900  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.204045  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.204044  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.204701  110484 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0813 20:07:24.204911  110484 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.204972  110484 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0813 20:07:24.204993  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.205002  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.205032  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.205095  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.206011  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.206011  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.206076  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.206267  110484 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0813 20:07:24.206290  110484 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0813 20:07:24.206441  110484 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.206509  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.206519  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.206554  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.206643  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.206967  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.207342  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.208171  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.208298  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.208476  110484 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0813 20:07:24.208639  110484 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0813 20:07:24.209387  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.208818  110484 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.210390  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.210498  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.210649  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.210795  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.211636  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.211797  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.211988  110484 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0813 20:07:24.212084  110484 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0813 20:07:24.212191  110484 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.212250  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.212258  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.212281  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.213166  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.213269  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.214250  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.214321  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.214378  110484 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0813 20:07:24.214506  110484 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.214590  110484 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0813 20:07:24.214668  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.214681  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.214707  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.214815  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.215219  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.215383  110484 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0813 20:07:24.215552  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.215627  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.215636  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.215647  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.215679  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.215738  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.215984  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.216100  110484 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0813 20:07:24.216245  110484 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.216310  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.216319  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.216346  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.216400  110484 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0813 20:07:24.216413  110484 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0813 20:07:24.216571  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.216574  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.216796  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.216799  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.216853  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.217047  110484 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0813 20:07:24.217093  110484 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0813 20:07:24.217212  110484 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.217289  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.217301  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.217344  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.217446  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.218251  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.218370  110484 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0813 20:07:24.218399  110484 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.218667  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.218729  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.218774  110484 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0813 20:07:24.218868  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.218881  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.218912  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.218983  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.219203  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.219274  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.219285  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.219294  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.219322  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.219365  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.220676  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.220689  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.220860  110484 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.220929  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.220938  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.220965  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.221012  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.221053  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.221108  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.221309  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.221431  110484 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0813 20:07:24.221511  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.221767  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.221860  110484 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0813 20:07:24.221997  110484 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.222169  110484 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.222757  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.222832  110484 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.223473  110484 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.224751  110484 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.225477  110484 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.226018  110484 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.226191  110484 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.226417  110484 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.226936  110484 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.227586  110484 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.227842  110484 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.228772  110484 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.229145  110484 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.229710  110484 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.229970  110484 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.230676  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.230931  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.231172  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.231332  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.231559  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.231742  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.231900  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.232706  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.233060  110484 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.233896  110484 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.234823  110484 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.235312  110484 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.235711  110484 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.236642  110484 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.237093  110484 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.238010  110484 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.238908  110484 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.240024  110484 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.241035  110484 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.241516  110484 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.241766  110484 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0813 20:07:24.241872  110484 master.go:434] Enabling API group "authentication.k8s.io".
I0813 20:07:24.241960  110484 master.go:434] Enabling API group "authorization.k8s.io".
I0813 20:07:24.242228  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.242485  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.242578  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.242757  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.242993  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.244506  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.244995  110484 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0813 20:07:24.245165  110484 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0813 20:07:24.245287  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.245838  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.246012  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.246155  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.245355  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.246392  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.247133  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.247526  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.247384  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.248117  110484 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0813 20:07:24.248492  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.248713  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.248803  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.248878  110484 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0813 20:07:24.248903  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.248976  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.249558  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.249662  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.249722  110484 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0813 20:07:24.249744  110484 master.go:434] Enabling API group "autoscaling".
I0813 20:07:24.249887  110484 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0813 20:07:24.249905  110484 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.250016  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.250026  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.250062  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.250187  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.250439  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.250556  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.250573  110484 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0813 20:07:24.250767  110484 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.250791  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.250832  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.250842  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.250872  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.250934  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.250998  110484 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0813 20:07:24.251254  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.251376  110484 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0813 20:07:24.251392  110484 master.go:434] Enabling API group "batch".
I0813 20:07:24.251543  110484 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.251588  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.251625  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.251650  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.251655  110484 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0813 20:07:24.251672  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.251690  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.251823  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.252069  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.252138  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.252328  110484 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0813 20:07:24.252386  110484 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0813 20:07:24.252544  110484 master.go:434] Enabling API group "certificates.k8s.io".
I0813 20:07:24.252778  110484 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.252845  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.252854  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.252882  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.252931  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.253157  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.253266  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.253331  110484 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0813 20:07:24.253461  110484 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0813 20:07:24.253951  110484 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.254017  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.254027  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.254054  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.254103  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.255028  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.255139  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.255244  110484 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0813 20:07:24.255257  110484 master.go:434] Enabling API group "coordination.k8s.io".
I0813 20:07:24.255314  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.255397  110484 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.255461  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.255471  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.255512  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.255562  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.255612  110484 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0813 20:07:24.255767  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.256061  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.256159  110484 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0813 20:07:24.256177  110484 master.go:434] Enabling API group "extensions".
I0813 20:07:24.256330  110484 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.256396  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.256408  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.256441  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.256476  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.256502  110484 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0813 20:07:24.256667  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.256949  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.260833  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.261004  110484 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0813 20:07:24.261169  110484 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.261259  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.261270  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.261340  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.261452  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.261505  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.261535  110484 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0813 20:07:24.262013  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.262262  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.262520  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.262661  110484 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0813 20:07:24.262686  110484 master.go:434] Enabling API group "networking.k8s.io".
I0813 20:07:24.262728  110484 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.262796  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.262806  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.262837  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.262877  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.262907  110484 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0813 20:07:24.263088  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.263317  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.263384  110484 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0813 20:07:24.263668  110484 master.go:434] Enabling API group "node.k8s.io".
I0813 20:07:24.263806  110484 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.263861  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.263869  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.263892  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.263922  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.263981  110484 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0813 20:07:24.264149  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.264329  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.264413  110484 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0813 20:07:24.264521  110484 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.264568  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.264576  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.264634  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.264679  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.264709  110484 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0813 20:07:24.264885  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.265875  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.265962  110484 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0813 20:07:24.265973  110484 master.go:434] Enabling API group "policy".
I0813 20:07:24.265999  110484 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.266040  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.266046  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.266094  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.266119  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.266139  110484 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0813 20:07:24.266233  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.266417  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.266454  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.266529  110484 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0813 20:07:24.266574  110484 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0813 20:07:24.266722  110484 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.266800  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.266811  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.266871  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.266997  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.267252  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.267393  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.267432  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.267473  110484 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0813 20:07:24.267493  110484 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.267532  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.267539  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.267557  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.267581  110484 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0813 20:07:24.267617  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.267852  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.267969  110484 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0813 20:07:24.268103  110484 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.268160  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.268172  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.268204  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.268317  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.268380  110484 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0813 20:07:24.268657  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.268925  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.269359  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.269810  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.270022  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.270069  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.270255  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.270371  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.270617  110484 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0813 20:07:24.270670  110484 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.270741  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.270751  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.270781  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.270847  110484 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0813 20:07:24.271071  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.271128  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.271311  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.271402  110484 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0813 20:07:24.271415  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.271665  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.271738  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.271888  110484 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.271956  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.271966  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.272043  110484 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0813 20:07:24.273016  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.273542  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.273676  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.274184  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.274721  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.274994  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.275222  110484 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0813 20:07:24.275318  110484 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.275331  110484 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0813 20:07:24.276271  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.276970  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.277088  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.277193  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.277338  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.278094  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.278287  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.278556  110484 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0813 20:07:24.278713  110484 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0813 20:07:24.279561  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.280733  110484 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.280932  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.281046  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.281151  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.281351  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.282191  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.282393  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.282666  110484 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0813 20:07:24.282829  110484 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0813 20:07:24.283073  110484 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0813 20:07:24.284845  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.286673  110484 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.286983  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.287103  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.287252  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.287403  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.287906  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.288036  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.288275  110484 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0813 20:07:24.288403  110484 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0813 20:07:24.288470  110484 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.288575  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.288585  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.288638  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.288687  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.289451  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.289482  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.289487  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.289810  110484 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0813 20:07:24.289839  110484 master.go:434] Enabling API group "scheduling.k8s.io".
I0813 20:07:24.289961  110484 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0813 20:07:24.289996  110484 master.go:423] Skipping disabled API group "settings.k8s.io".
I0813 20:07:24.290165  110484 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.290282  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.290293  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.290369  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.290446  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.291132  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.291241  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.291285  110484 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0813 20:07:24.291364  110484 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0813 20:07:24.291437  110484 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.291522  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.291534  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.291559  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.291565  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.291697  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.292540  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.292544  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.292656  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.292788  110484 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0813 20:07:24.292825  110484 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.292849  110484 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0813 20:07:24.292889  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.292900  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.292940  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.293112  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.293323  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.293400  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.293409  110484 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0813 20:07:24.293435  110484 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.293494  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.293503  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.293549  110484 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0813 20:07:24.293821  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.293976  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.294230  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.294651  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.294705  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.294746  110484 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0813 20:07:24.294820  110484 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0813 20:07:24.294872  110484 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.294922  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.294929  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.294959  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.295015  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.295245  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.295403  110484 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0813 20:07:24.295541  110484 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.295640  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.295650  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.295679  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.295726  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.295780  110484 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0813 20:07:24.295952  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.296369  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.296413  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.296467  110484 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0813 20:07:24.296484  110484 master.go:434] Enabling API group "storage.k8s.io".
I0813 20:07:24.296487  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.296655  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.296638  110484 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.296677  110484 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0813 20:07:24.296715  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.296725  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.296783  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.296828  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.297099  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.297123  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.297260  110484 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0813 20:07:24.297279  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.297303  110484 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0813 20:07:24.297402  110484 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.297474  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.297485  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.297512  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.297555  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.297768  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.297784  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.297872  110484 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0813 20:07:24.297947  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.297975  110484 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0813 20:07:24.298003  110484 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.298067  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.298077  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.298131  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.298193  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.299129  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.299326  110484 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0813 20:07:24.299377  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.299473  110484 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.299522  110484 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0813 20:07:24.299582  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.299624  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.299654  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.299722  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.299900  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.299937  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.300047  110484 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0813 20:07:24.300091  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.300125  110484 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0813 20:07:24.300174  110484 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.300240  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.300250  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.300314  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.300389  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.300882  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.300981  110484 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0813 20:07:24.301001  110484 master.go:434] Enabling API group "apps".
I0813 20:07:24.301001  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.301029  110484 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.301044  110484 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0813 20:07:24.301091  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.301101  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.301129  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.301244  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.301454  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.301538  110484 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0813 20:07:24.301561  110484 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.301629  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.301639  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.301698  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.301731  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.301739  110484 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0813 20:07:24.301831  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.302011  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.302080  110484 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0813 20:07:24.302100  110484 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.302145  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.302153  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.302170  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.302194  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.302241  110484 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0813 20:07:24.302421  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.302631  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.302679  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.302710  110484 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0813 20:07:24.302743  110484 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0813 20:07:24.302736  110484 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.302791  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.302800  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.302826  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.302896  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.302958  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.303090  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.303103  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.303156  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.303205  110484 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0813 20:07:24.303226  110484 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0813 20:07:24.303257  110484 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.303282  110484 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0813 20:07:24.303480  110484 client.go:354] parsed scheme: ""
I0813 20:07:24.303495  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:24.303522  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:24.303564  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.303799  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:24.303876  110484 store.go:1342] Monitoring events count at <storage-prefix>//events
I0813 20:07:24.303886  110484 master.go:434] Enabling API group "events.k8s.io".
I0813 20:07:24.303935  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:24.303972  110484 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0813 20:07:24.304064  110484 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.304189  110484 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.304320  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.304426  110484 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.304572  110484 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.304824  110484 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.304935  110484 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.305237  110484 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.305360  110484 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.305455  110484 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.305536  110484 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.306167  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.306359  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.306509  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.306678  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.306905  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.307184  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.307207  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.308304  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.308511  110484 watch_cache.go:405] Replace watchCache (rev: 29055) 
I0813 20:07:24.308720  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.309582  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.309781  110484 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.310356  110484 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.310556  110484 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.311148  110484 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.311490  110484 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 20:07:24.311558  110484 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0813 20:07:24.312163  110484 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.312275  110484 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.312414  110484 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.313160  110484 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.314053  110484 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.314953  110484 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.315243  110484 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.315987  110484 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.316585  110484 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.316962  110484 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.317956  110484 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 20:07:24.318145  110484 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0813 20:07:24.318960  110484 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.319384  110484 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.320225  110484 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.320948  110484 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.321459  110484 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.322125  110484 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.322800  110484 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.323493  110484 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.324044  110484 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.324814  110484 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.325459  110484 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 20:07:24.325642  110484 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0813 20:07:24.326209  110484 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.326833  110484 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 20:07:24.326989  110484 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0813 20:07:24.327622  110484 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.328275  110484 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.328565  110484 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.329204  110484 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.329719  110484 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.330325  110484 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.330924  110484 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 20:07:24.331161  110484 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0813 20:07:24.332523  110484 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.333204  110484 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.333482  110484 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.334335  110484 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.334571  110484 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.334809  110484 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.335463  110484 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.335707  110484 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.335969  110484 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.336675  110484 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.336912  110484 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.337158  110484 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 20:07:24.337225  110484 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0813 20:07:24.337235  110484 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0813 20:07:24.337963  110484 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.338511  110484 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.339098  110484 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.339686  110484 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.340280  110484 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7df8d06f-84e8-49ed-93fa-bb09a28c5a43", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 20:07:24.342629  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.342698  110484 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0813 20:07:24.342706  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.342715  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.342723  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.342731  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.342796  110484 httplog.go:90] GET /healthz: (277.184µs) 0 [Go-http-client/1.1 127.0.0.1:51200]
I0813 20:07:24.344044  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.577822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:24.347007  110484 httplog.go:90] GET /api/v1/services: (1.512795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:24.351341  110484 httplog.go:90] GET /api/v1/services: (1.215584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:24.353929  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.353969  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.353983  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.353993  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.354002  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.354030  110484 httplog.go:90] GET /healthz: (248.042µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:24.356009  110484 httplog.go:90] GET /api/v1/services: (1.452893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:24.356085  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.679998ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51200]
I0813 20:07:24.357536  110484 httplog.go:90] GET /api/v1/services: (1.949793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.358069  110484 httplog.go:90] POST /api/v1/namespaces: (1.532173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51200]
I0813 20:07:24.359713  110484 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.027382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.361496  110484 httplog.go:90] POST /api/v1/namespaces: (1.303974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.362780  110484 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (902.493µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.364580  110484 httplog.go:90] POST /api/v1/namespaces: (1.475728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
E0813 20:07:24.378662  110484 factory.go:599] Error getting pod permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/test-pod for retry: Get http://127.0.0.1:39477/api/v1/namespaces/permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/pods/test-pod: dial tcp 127.0.0.1:39477: connect: connection refused; retrying...
I0813 20:07:24.443656  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.443707  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.443721  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.443732  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.443742  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.443793  110484 httplog.go:90] GET /healthz: (326.252µs) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:24.454901  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.454953  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.454968  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.454977  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.454986  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.455024  110484 httplog.go:90] GET /healthz: (271.635µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.543536  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.543578  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.543612  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.543623  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.543631  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.543670  110484 httplog.go:90] GET /healthz: (280.145µs) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:24.554840  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.554866  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.554874  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.554881  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.554887  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.554912  110484 httplog.go:90] GET /healthz: (193.938µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.643474  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.643507  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.643519  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.643526  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.643531  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.643558  110484 httplog.go:90] GET /healthz: (245.931µs) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:24.655275  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.655309  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.655323  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.655333  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.655341  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.655376  110484 httplog.go:90] GET /healthz: (250.098µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.743639  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.743678  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.743691  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.743701  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.743709  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.743743  110484 httplog.go:90] GET /healthz: (338.371µs) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:24.754772  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.754817  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.754829  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.754839  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.754852  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.754896  110484 httplog.go:90] GET /healthz: (272.264µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.845410  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.845452  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.845465  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.845473  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.845482  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.845514  110484 httplog.go:90] GET /healthz: (256.507µs) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:24.854698  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.854739  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.854753  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.854763  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.854771  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.854804  110484 httplog.go:90] GET /healthz: (276.973µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:24.943462  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.943501  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.943513  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.943524  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.943532  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.943570  110484 httplog.go:90] GET /healthz: (259.641µs) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:24.954851  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:24.954884  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:24.954898  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:24.954909  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:24.954944  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:24.954979  110484 httplog.go:90] GET /healthz: (293.357µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:25.043635  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:25.043667  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.043677  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:25.043684  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:25.043691  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:25.043734  110484 httplog.go:90] GET /healthz: (278.98µs) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:25.054771  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:25.054804  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.054817  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:25.054827  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:25.054835  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:25.054864  110484 httplog.go:90] GET /healthz: (239.289µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:25.143516  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:25.143555  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.143568  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:25.143578  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:25.143587  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:25.143649  110484 httplog.go:90] GET /healthz: (276.157µs) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:25.154843  110484 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 20:07:25.154885  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.154901  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:25.154911  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:25.154919  110484 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:25.154948  110484 httplog.go:90] GET /healthz: (300.68µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:25.197266  110484 client.go:354] parsed scheme: ""
I0813 20:07:25.197307  110484 client.go:354] scheme "" not registered, fallback to default scheme
I0813 20:07:25.197368  110484 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 20:07:25.197469  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:25.197903  110484 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 20:07:25.197985  110484 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 20:07:25.244768  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.244804  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:25.244816  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:25.244825  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:25.244873  110484 httplog.go:90] GET /healthz: (1.483587ms) 0 [Go-http-client/1.1 127.0.0.1:51204]
I0813 20:07:25.255843  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.255876  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:25.255887  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:25.255896  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:25.255941  110484 httplog.go:90] GET /healthz: (1.230862ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:25.345171  110484 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.677201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:25.345172  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.488292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51204]
I0813 20:07:25.345340  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.345361  110484 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 20:07:25.345372  110484 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 20:07:25.345381  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 20:07:25.345407  110484 httplog.go:90] GET /healthz: (1.902373ms) 0 [Go-http-client/1.1 127.0.0.1:51224]
I0813 20:07:25.345412  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.412474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.347358  110484 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.37642ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.347358  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.441023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:25.347896  110484 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.631607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.348333  110484 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0813 20:07:25.348967  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (796.252µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:25.349551  110484 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (953.146µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.350860  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.146412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:25.351343  110484 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.464013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.351650  110484 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0813 20:07:25.351663  110484 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0813 20:07:25.352199  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.044442ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:25.352713  110484 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (4.951835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.353474  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (965.264µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51202]
I0813 20:07:25.355269  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.277925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.356806  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.356831  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.356863  110484 httplog.go:90] GET /healthz: (1.937667ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
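Editorial note: the multi-line healthz blocks above and below enumerate the apiserver's individual health checks; each "[-]" entry is a post-start hook that has not yet finished, and as long as any entry fails the GET /healthz logged right after the block does not return 200, so the caller keeps retrying. A minimal sketch of that kind of readiness poll, assuming a plain net/http loop; waitForHealthz and the server address are hypothetical and not taken from the test harness itself:

// Hypothetical readiness poll; waitForHealthz and the address are illustrative only.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz keeps hitting /healthz until every check (including the
// rbac/bootstrap-roles post-start hook seen failing in the log) reports ok.
func waitForHealthz(base string) error {
	for attempt := 0; attempt < 100; attempt++ {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all checks ok, apiserver is ready
			}
		}
		time.Sleep(100 * time.Millisecond) // retry until healthy or out of attempts
	}
	return fmt.Errorf("apiserver at %s did not become healthy", base)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080"); err != nil {
		fmt.Println(err)
	}
}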
I0813 20:07:25.357034  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.487855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.358182  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (758.553µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.359727  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.153945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.362477  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.308799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.362731  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0813 20:07:25.363782  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (826.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.366076  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.961443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.366444  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0813 20:07:25.367959  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.275309ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.370252  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.370459  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0813 20:07:25.371704  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.052341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.373731  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.653987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.373916  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0813 20:07:25.375062  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (943.583µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.377317  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.936241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.377548  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0813 20:07:25.378692  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (921.838µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.380963  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.722397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.382585  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0813 20:07:25.384471  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.492317ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.386714  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.689697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.386912  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0813 20:07:25.388076  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (979.449µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.391580  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.943104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.391914  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0813 20:07:25.393137  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.012457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.395833  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.998449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.396119  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0813 20:07:25.397469  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.157195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.400149  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.168714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.400415  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0813 20:07:25.401873  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.271318ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.404081  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.404307  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0813 20:07:25.405677  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.120829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.408103  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.875411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.408443  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0813 20:07:25.409559  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (871.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.411696  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.697251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.411924  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0813 20:07:25.413484  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.31986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.415263  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.340063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.415494  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0813 20:07:25.416548  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (834.477µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.418557  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.56518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.418831  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0813 20:07:25.419929  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (825.967µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.422202  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.662302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.422556  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0813 20:07:25.424253  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.465898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.426347  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.706179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.426748  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0813 20:07:25.428265  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.25053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.430286  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.5402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.430483  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0813 20:07:25.433068  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (2.233132ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.435276  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.550854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.435585  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0813 20:07:25.436943  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.076273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.438865  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.464268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.439097  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0813 20:07:25.440452  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.003855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.443166  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.019011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.443457  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0813 20:07:25.444062  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.444093  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.444156  110484 httplog.go:90] GET /healthz: (858.696µs) 0 [Go-http-client/1.1 127.0.0.1:51224]
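Editorial note: the long run of paired "GET .../clusterroles/<name>: 404" and "POST .../clusterroles: 201" entries, each followed by a storage_rbac.go "created clusterrole..." line, is the rbac/bootstrap-roles post-start hook ensuring every default ClusterRole (and, further down, ClusterRoleBinding) exists. A minimal sketch of that create-if-missing pattern at the HTTP level, under the assumption of an unauthenticated local endpoint; ensureClusterRole, its payload handling, and the example names are hypothetical and only mirror the request sequence visible in the log:

// Hypothetical create-if-missing helper mirroring the GET-404-then-POST-201
// sequence in the log; names, endpoint, and payload handling are illustrative.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func ensureClusterRole(base, name string, manifest []byte) error {
	// Lookup first: a 404 here corresponds to the "GET .../clusterroles/<name>: 404" lines.
	resp, err := http.Get(base + "/apis/rbac.authorization.k8s.io/v1/clusterroles/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK:
		return nil // already present, nothing to create
	case http.StatusNotFound:
		// Create it: corresponds to the "POST .../clusterroles: 201" lines.
		post, err := http.Post(base+"/apis/rbac.authorization.k8s.io/v1/clusterroles",
			"application/json", bytes.NewReader(manifest))
		if err != nil {
			return err
		}
		post.Body.Close()
		if post.StatusCode != http.StatusCreated {
			return fmt.Errorf("creating clusterrole %s: unexpected status %d", name, post.StatusCode)
		}
		return nil
	default:
		return fmt.Errorf("looking up clusterrole %s: unexpected status %d", name, resp.StatusCode)
	}
}

func main() {
	// Example payload elided; a real ClusterRole manifest would go here.
	if err := ensureClusterRole("http://127.0.0.1:8080", "system:example", []byte(`{}`)); err != nil {
		fmt.Println(err)
	}
}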
I0813 20:07:25.445018  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.252149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.446725  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.392776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.446919  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0813 20:07:25.448500  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.087298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.450655  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.671184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.450850  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0813 20:07:25.452158  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.09613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.454342  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.572884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.454506  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0813 20:07:25.455638  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (951.785µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.457190  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.457212  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.457252  110484 httplog.go:90] GET /healthz: (1.278411ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.458704  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.282192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.458944  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0813 20:07:25.460276  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.057582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.463782  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.948474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.464078  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0813 20:07:25.465515  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.121965ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.467578  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.506876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.468102  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0813 20:07:25.469378  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (991µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.471786  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.901873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.471989  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0813 20:07:25.472869  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (751.352µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.474700  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.480329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.474866  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0813 20:07:25.475896  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (850.563µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.477844  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.455301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.478157  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0813 20:07:25.479285  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (930.025µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.481963  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.144032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.482443  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0813 20:07:25.483779  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (978.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.486180  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.896651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.486428  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0813 20:07:25.487652  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (917.474µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.489872  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.704623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.490238  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0813 20:07:25.491374  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (850.835µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.493234  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.389081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.493476  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0813 20:07:25.494811  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (979.31µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.497353  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.911809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.497665  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0813 20:07:25.499098  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.092147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.504792  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.33031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.505089  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0813 20:07:25.506425  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.038982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.509189  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.170296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.509514  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0813 20:07:25.511540  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.648623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.514143  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.879041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.514414  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0813 20:07:25.515726  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.00566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.517889  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.599715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.518242  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0813 20:07:25.519550  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (997.655µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.522527  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.433547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.522856  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0813 20:07:25.524277  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.130351ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.526341  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.518952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.526701  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0813 20:07:25.528587  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.619597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.530961  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.794779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.531317  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0813 20:07:25.532865  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.174727ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.535090  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.713264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.535383  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0813 20:07:25.536900  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.157261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.539193  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.785224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.539415  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0813 20:07:25.541218  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.488835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.544305  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.195989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.544495  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.544904  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.545050  110484 httplog.go:90] GET /healthz: (1.790294ms) 0 [Go-http-client/1.1 127.0.0.1:51224]
I0813 20:07:25.545457  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0813 20:07:25.546753  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.035611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.548949  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.685471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.549208  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0813 20:07:25.550320  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (953.162µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.552740  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.860251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.553088  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0813 20:07:25.555437  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (2.050797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.555901  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.555998  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.556152  110484 httplog.go:90] GET /healthz: (1.646976ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.557867  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.790695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.558116  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0813 20:07:25.559373  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.063958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.561984  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.173475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.562301  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0813 20:07:25.563570  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (920.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.566167  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.982771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.566357  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0813 20:07:25.567850  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.251961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.569858  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.570075  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0813 20:07:25.585046  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.585401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.605308  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.213224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.605631  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0813 20:07:25.624687  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.600285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.645019  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.645055  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.645104  110484 httplog.go:90] GET /healthz: (1.879832ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:25.645844  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.653649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.646088  110484 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0813 20:07:25.655820  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.656034  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.656465  110484 httplog.go:90] GET /healthz: (1.645384ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.664879  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.577902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.686446  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.197747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.686938  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0813 20:07:25.704680  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.526869ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.725676  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.484086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.726107  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0813 20:07:25.744905  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.744956  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.744993  110484 httplog.go:90] GET /healthz: (974.647µs) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:25.745567  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.37175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.756538  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.756580  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.756662  110484 httplog.go:90] GET /healthz: (1.890577ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.765738  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.56542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.766252  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0813 20:07:25.785290  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.382758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.805508  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.44186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.805813  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0813 20:07:25.824372  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.315537ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.844637  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.844673  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.844715  110484 httplog.go:90] GET /healthz: (1.376174ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:25.845148  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.110366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.845411  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0813 20:07:25.855875  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.855944  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.855995  110484 httplog.go:90] GET /healthz: (1.370516ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.864712  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.499143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.885353  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.332615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.885889  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0813 20:07:25.904567  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.239252ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.925978  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.842732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.926290  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0813 20:07:25.944430  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.290026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:25.945503  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.945538  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.945578  110484 httplog.go:90] GET /healthz: (1.166371ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:25.955699  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:25.955734  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:25.955780  110484 httplog.go:90] GET /healthz: (1.110983ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.965398  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.308945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:25.965740  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0813 20:07:25.984335  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.239626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.006047  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.939839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.006329  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0813 20:07:26.024642  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.271737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.044897  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.808061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.045043  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.045065  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.045103  110484 httplog.go:90] GET /healthz: (1.090137ms) 0 [Go-http-client/1.1 127.0.0.1:51224]
I0813 20:07:26.045193  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0813 20:07:26.056356  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.056392  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.056473  110484 httplog.go:90] GET /healthz: (1.08793ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.064780  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.535289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.085301  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.195054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.085579  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0813 20:07:26.104649  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.488734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.126094  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.87223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.126447  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0813 20:07:26.144242  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.144271  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.144314  110484 httplog.go:90] GET /healthz: (1.017562ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:26.145115  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.890808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.155962  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.155994  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.156034  110484 httplog.go:90] GET /healthz: (1.255217ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.165300  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.1439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.165673  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0813 20:07:26.184560  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.344463ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.206393  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.21683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.206698  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0813 20:07:26.224656  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.418323ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.244948  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.244983  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.245020  110484 httplog.go:90] GET /healthz: (1.065296ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:26.245483  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.458717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.245732  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0813 20:07:26.255650  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.255680  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.255718  110484 httplog.go:90] GET /healthz: (1.012501ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.264641  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.520538ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.284950  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.866546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.285261  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0813 20:07:26.305044  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.445261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.325982  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.890681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.326426  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0813 20:07:26.344257  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.344299  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.344334  110484 httplog.go:90] GET /healthz: (1.084896ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:26.345083  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.990126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.356100  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.356149  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.356200  110484 httplog.go:90] GET /healthz: (1.334318ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.365870  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.791346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.366127  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0813 20:07:26.384496  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.384444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.405084  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.009462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.405529  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0813 20:07:26.424762  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.202113ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.444495  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.444527  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.444570  110484 httplog.go:90] GET /healthz: (1.275199ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:26.445052  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.030977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.445297  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0813 20:07:26.455540  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.455570  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.455637  110484 httplog.go:90] GET /healthz: (975.517µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.464083  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.034591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.484882  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.728284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.485164  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0813 20:07:26.505769  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.423726ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.525115  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.086867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.525391  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0813 20:07:26.544107  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.037996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.544523  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.544704  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.544962  110484 httplog.go:90] GET /healthz: (1.759731ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:26.555609  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.555770  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.556178  110484 httplog.go:90] GET /healthz: (1.494859ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.564922  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.949357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.565123  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0813 20:07:26.584306  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.251568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.605614  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.532356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.605856  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0813 20:07:26.624689  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.601251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.644648  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.644678  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.644712  110484 httplog.go:90] GET /healthz: (1.149239ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:26.646281  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.618193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.646546  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0813 20:07:26.655493  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.655523  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.655552  110484 httplog.go:90] GET /healthz: (813.152µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.664570  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.213211ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.684858  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.862827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.685108  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0813 20:07:26.704460  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.437783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.728434  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.273302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.728706  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0813 20:07:26.745036  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.745080  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.745043  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.029866ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.745109  110484 httplog.go:90] GET /healthz: (1.537275ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:26.755921  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.756819  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.757115  110484 httplog.go:90] GET /healthz: (2.3078ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.765208  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.05385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.765923  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0813 20:07:26.785043  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.904264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.805792  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.234182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.806215  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0813 20:07:26.824744  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.693697ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.844829  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.778518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:26.845016  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0813 20:07:26.845087  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.845113  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.845144  110484 httplog.go:90] GET /healthz: (702.03µs) 0 [Go-http-client/1.1 127.0.0.1:51224]
I0813 20:07:26.855643  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.855675  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.855717  110484 httplog.go:90] GET /healthz: (1.146834ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.864389  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.283005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.885401  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.357374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.885672  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0813 20:07:26.904275  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.12654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.925027  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.972444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.926106  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0813 20:07:26.944085  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.944113  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.944150  110484 httplog.go:90] GET /healthz: (884.685µs) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:26.944242  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.195451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.955561  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:26.955627  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:26.955675  110484 httplog.go:90] GET /healthz: (986.121µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.965186  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:26.965401  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0813 20:07:26.984323  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.229627ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.005627  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.492062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.005899  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0813 20:07:27.024456  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.386223ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.045569  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.045979  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.046029  110484 httplog.go:90] GET /healthz: (1.699508ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:27.046449  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.262909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.046758  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0813 20:07:27.055413  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.055454  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.055512  110484 httplog.go:90] GET /healthz: (937.92µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.064539  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.478127ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.084802  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.710126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.085061  110484 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0813 20:07:27.104475  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.375139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.106768  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.580344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.125158  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.000548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.125418  110484 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0813 20:07:27.144427  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.302488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.144543  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.144566  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.144633  110484 httplog.go:90] GET /healthz: (771.238µs) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:27.146449  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.369278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.156053  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.156086  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.156129  110484 httplog.go:90] GET /healthz: (1.408042ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.166033  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.005479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.166403  110484 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0813 20:07:27.184739  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.637382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.187253  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.860995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.205388  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.309582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.205919  110484 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0813 20:07:27.224549  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.168673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.226471  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.283787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.245427  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.245465  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.245517  110484 httplog.go:90] GET /healthz: (2.273734ms) 0 [Go-http-client/1.1 127.0.0.1:51224]
I0813 20:07:27.245779  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.582136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.246035  110484 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0813 20:07:27.256068  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.256102  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.256142  110484 httplog.go:90] GET /healthz: (1.27067ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.269212  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (5.367473ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.281641  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.699461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.284850  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.801373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.285073  110484 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0813 20:07:27.304904  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.746354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.307404  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.683041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.325805  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.604131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.326127  110484 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0813 20:07:27.344503  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.344541  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.344587  110484 httplog.go:90] GET /healthz: (1.294805ms) 0 [Go-http-client/1.1 127.0.0.1:51224]
I0813 20:07:27.345183  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.833901ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.347262  110484 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.558449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.355889  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.356097  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.356295  110484 httplog.go:90] GET /healthz: (1.561684ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.366046  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.003659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.366469  110484 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0813 20:07:27.384582  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.434112ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.386631  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.449498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.406067  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.841549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.406668  110484 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0813 20:07:27.424538  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.458864ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.426628  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.453802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.444411  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.444776  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.445009  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.834666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.445566  110484 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0813 20:07:27.445888  110484 httplog.go:90] GET /healthz: (2.592207ms) 0 [Go-http-client/1.1 127.0.0.1:51224]
I0813 20:07:27.456003  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.456231  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.456539  110484 httplog.go:90] GET /healthz: (1.839654ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.464977  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.892813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.467290  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.583328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.486297  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.127866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.487046  110484 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0813 20:07:27.504357  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.289244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.506947  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.75792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.525953  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.797686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.526488  110484 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0813 20:07:27.544646  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.544678  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.544736  110484 httplog.go:90] GET /healthz: (1.361488ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:27.545194  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.896454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.547509  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.618149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.556222  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.556260  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.556311  110484 httplog.go:90] GET /healthz: (1.597269ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.565898  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.738101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.566384  110484 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0813 20:07:27.584606  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.486918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.586884  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.746678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.606169  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.046737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.607938  110484 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0813 20:07:27.624820  110484 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.267173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.627041  110484 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.728791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.644324  110484 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 20:07:27.644354  110484 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 20:07:27.644382  110484 httplog.go:90] GET /healthz: (1.101637ms) 0 [Go-http-client/1.1 127.0.0.1:51222]
I0813 20:07:27.646243  110484 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.242488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.646498  110484 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0813 20:07:27.656330  110484 httplog.go:90] GET /healthz: (1.596316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.659018  110484 httplog.go:90] GET /api/v1/namespaces/default: (1.317269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.661517  110484 httplog.go:90] POST /api/v1/namespaces: (2.027826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.663791  110484 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.692436ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.667719  110484 httplog.go:90] POST /api/v1/namespaces/default/services: (3.610277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.669236  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.081432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.671590  110484 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.844942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.744782  110484 httplog.go:90] GET /healthz: (1.277019ms) 200 [Go-http-client/1.1 127.0.0.1:51224]
W0813 20:07:27.746089  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746119  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746140  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746150  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746190  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746346  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746432  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746550  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746649  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746773  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 20:07:27.746868  110484 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0813 20:07:27.746977  110484 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0813 20:07:27.747061  110484 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
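
The factory.go line above lists exactly which fit predicates and priority functions the DefaultProvider scheduler was built with. Purely as an illustration (this is not the factory's own construction code), the same two sets restated as Go map literals, with every name taken verbatim from the log line:

package main

import "fmt"

func main() {
	// Fit predicates reported by factory.go:382 above.
	fitPredicates := map[string]struct{}{
		"CheckNodeCondition": {}, "CheckNodeDiskPressure": {}, "CheckNodeMemoryPressure": {},
		"CheckNodePIDPressure": {}, "CheckVolumeBinding": {}, "GeneralPredicates": {},
		"MatchInterPodAffinity": {}, "MaxAzureDiskVolumeCount": {}, "MaxCSIVolumeCountPred": {},
		"MaxEBSVolumeCount": {}, "MaxGCEPDVolumeCount": {}, "NoDiskConflict": {},
		"NoVolumeZoneConflict": {}, "PodToleratesNodeTaints": {},
	}
	// Priority functions, likewise taken from the same log line.
	priorities := map[string]struct{}{
		"BalancedResourceAllocation": {}, "ImageLocalityPriority": {}, "InterPodAffinityPriority": {},
		"LeastRequestedPriority": {}, "NodeAffinityPriority": {}, "NodePreferAvoidPodsPriority": {},
		"SelectorSpreadPriority": {}, "TaintTolerationPriority": {},
	}
	fmt.Printf("%d predicates, %d priorities\n", len(fitPredicates), len(priorities))
}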
I0813 20:07:27.747962  110484 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748095  110484 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748235  110484 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748256  110484 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748271  110484 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748287  110484 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748397  110484 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748415  110484 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.747962  110484 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748788  110484 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748058  110484 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.749479  110484 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.748833  110484 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.749763  110484 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.750204  110484 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (632.584µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0813 20:07:27.750370  110484 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (1.704278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:07:27.750388  110484 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (490.37µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51292]
I0813 20:07:27.750406  110484 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (504.845µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51298]
I0813 20:07:27.750732  110484 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (2.104345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:07:27.749131  110484 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.750804  110484 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.749276  110484 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.750930  110484 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.751271  110484 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=29055 labels= fields= timeout=7m16s
I0813 20:07:27.751370  110484 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (2.136231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51294]
I0813 20:07:27.751373  110484 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (388.831µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0813 20:07:27.751556  110484 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=29055 labels= fields= timeout=9m0s
I0813 20:07:27.751803  110484 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=29055 labels= fields= timeout=8m27s
I0813 20:07:27.751853  110484 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=29055 labels= fields= timeout=9m28s
I0813 20:07:27.751877  110484 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (368.039µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51292]
I0813 20:07:27.752003  110484 get.go:250] Starting watch for /api/v1/services, rv=29303 labels= fields= timeout=9m24s
I0813 20:07:27.752346  110484 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (378.72µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51306]
I0813 20:07:27.752655  110484 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=29055 labels= fields= timeout=8m10s
I0813 20:07:27.752770  110484 get.go:250] Starting watch for /api/v1/pods, rv=29055 labels= fields= timeout=9m29s
I0813 20:07:27.752804  110484 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=29055 labels= fields= timeout=6m3s
I0813 20:07:27.752918  110484 get.go:250] Starting watch for /api/v1/nodes, rv=29055 labels= fields= timeout=9m4s
I0813 20:07:27.753302  110484 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.753317  110484 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.753875  110484 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.753895  110484 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0813 20:07:27.754304  110484 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (483.791µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51308]
I0813 20:07:27.754730  110484 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (498.897µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51310]
I0813 20:07:27.755782  110484 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=29055 labels= fields= timeout=8m15s
I0813 20:07:27.756156  110484 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=29055 labels= fields= timeout=7m12s
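
The "Starting reflector ... (1s)" and "Listing and watching" lines above come from a client-go SharedInformerFactory started with a 1-second resync period, which is also why per-second "forcing resync" lines appear further down, and the "caches populated" lines that follow correspond to waiting for the initial List of each informer to land in the cache. A hedged sketch of the equivalent factory startup and cache-sync wait; the kubeconfig path is a placeholder and only the pod informer (one of the several listed above) is shown.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig; the test wires its informers to the in-process apiserver.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	stop := make(chan struct{})
	defer close(stop)

	// 1s default resync matches the "(1s)" in the reflector startup lines above.
	factory := informers.NewSharedInformerFactory(cs, time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	factory.Start(stop) // one reflector per requested informer starts listing and watching
	// Blocks until the initial List has populated the cache ("caches populated" in the log).
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated")
}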
I0813 20:07:27.847770  110484 shared_informer.go:211] caches populated
I0813 20:07:27.947918  110484 shared_informer.go:211] caches populated
I0813 20:07:28.048176  110484 shared_informer.go:211] caches populated
I0813 20:07:28.148373  110484 shared_informer.go:211] caches populated
I0813 20:07:28.248589  110484 shared_informer.go:211] caches populated
I0813 20:07:28.348820  110484 shared_informer.go:211] caches populated
I0813 20:07:28.449044  110484 shared_informer.go:211] caches populated
I0813 20:07:28.549261  110484 shared_informer.go:211] caches populated
I0813 20:07:28.649553  110484 shared_informer.go:211] caches populated
I0813 20:07:28.749893  110484 shared_informer.go:211] caches populated
I0813 20:07:28.751039  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:28.751215  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:28.752298  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:28.752527  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:28.753568  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:28.755064  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:28.755342  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:28.850304  110484 shared_informer.go:211] caches populated
I0813 20:07:28.950498  110484 shared_informer.go:211] caches populated
I0813 20:07:28.954108  110484 httplog.go:90] POST /api/v1/nodes: (2.831371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:28.954191  110484 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0813 20:07:28.957224  110484 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods: (2.399777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:28.957631  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/waiting-pod
I0813 20:07:28.957656  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/waiting-pod
I0813 20:07:28.957810  110484 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/waiting-pod", node "test-node-0"
I0813 20:07:28.957826  110484 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0813 20:07:28.957875  110484 framework.go:558] waiting for 30s for pod "waiting-pod" at permit
I0813 20:07:28.959633  110484 factory.go:615] Attempting to bind signalling-pod to test-node-1
I0813 20:07:28.960076  110484 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0813 20:07:28.962172  110484 scheduler.go:447] Failed to bind pod: permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod
E0813 20:07:28.962189  110484 scheduler.go:449] scheduler cache ForgetPod failed: pod 7eb3a3bf-d27f-459b-9dec-085b312681f3 wasn't assumed so cannot be forgotten
E0813 20:07:28.962209  110484 scheduler.go:605] error binding pod: Post http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod/binding: dial tcp 127.0.0.1:35861: connect: connection refused
E0813 20:07:28.962238  110484 factory.go:566] Error scheduling permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod: Post http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod/binding: dial tcp 127.0.0.1:35861: connect: connection refused; retrying
I0813 20:07:28.962275  110484 factory.go:624] Updating pod condition for permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0813 20:07:28.962718  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
E0813 20:07:28.962728  110484 scheduler.go:280] Error updating the condition of the pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod: Put http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod/status: dial tcp 127.0.0.1:35861: connect: connection refused
E0813 20:07:28.962965  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35861/apis/events.k8s.io/v1beta1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/events: dial tcp 127.0.0.1:35861: connect: connection refused' (may retry after sleeping)
I0813 20:07:28.969469  110484 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/waiting-pod/binding: (7.322537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:28.969845  110484 scheduler.go:614] pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0813 20:07:28.972279  110484 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/events: (2.083656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
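
The 'waiting for 30s for pod "waiting-pod" at permit' line above is the scheduling framework parking the pod at the Permit extension point before binding; TestPreemptWithPermitPlugin then schedules a higher-priority pod to preempt that waiting pod. Below is a hedged sketch of a Permit plugin that returns a Wait status with a 30-second timeout. The plugin name is made up, and the interface shown is the current in-tree one (k8s.io/kubernetes/pkg/scheduler/framework); the 2019 run above vendored the older framework/v1alpha1 package, whose signatures differ slightly.

// permitwait.go: illustrative Permit plugin that holds every pod as a "waiting pod".
package permitwait

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework"
)

// WaitingPermit parks pods at Permit for up to 30s, matching the
// "waiting for 30s for pod ... at permit" log line above.
type WaitingPermit struct{}

var _ framework.PermitPlugin = &WaitingPermit{}

// Name returns the (hypothetical) plugin name used for registration.
func (pl *WaitingPermit) Name() string { return "WaitingPermit" }

// Permit returns Wait, so the framework keeps the pod pending until another
// actor (or a preempting pod, as in this test) allows or rejects it, or the
// 30s timeout expires.
func (pl *WaitingPermit) Permit(ctx context.Context, state *framework.CycleState, p *v1.Pod, nodeName string) (*framework.Status, time.Duration) {
	return framework.NewStatus(framework.Wait, ""), 30 * time.Second
}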
E0813 20:07:29.163387  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
E0813 20:07:29.563993  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
I0813 20:07:29.751391  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:29.751520  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:29.752622  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:29.752653  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:29.753762  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:29.755211  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:29.755510  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:30.364619  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
I0813 20:07:30.751630  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:30.751737  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:30.752763  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:30.752823  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:30.753958  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:30.755356  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:30.755678  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:31.751784  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:31.751872  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:31.752960  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:31.752971  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:31.754226  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:31.755530  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:31.756291  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:31.965225  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
I0813 20:07:32.751949  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:32.752066  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:32.753084  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:32.753126  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:32.754375  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:32.755720  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:32.756497  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:33.752168  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:33.752308  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:33.753241  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:33.753258  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:33.754628  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:33.755886  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:33.756654  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:34.451876  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39477/apis/events.k8s.io/v1beta1/namespaces/permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/events: dial tcp 127.0.0.1:39477: connect: connection refused' (may retry after sleeping)
I0813 20:07:34.752388  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:34.752505  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:34.753545  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:34.753610  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:34.754796  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:34.756319  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:34.756869  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:35.165837  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
I0813 20:07:35.752576  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:35.752661  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:35.753729  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:35.753759  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:35.754986  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:35.756484  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:35.757022  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:36.752780  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:36.753074  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:36.753889  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:36.753922  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:36.755136  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:36.756665  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:36.757169  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:37.179261  110484 factory.go:599] Error getting pod permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/test-pod for retry: Get http://127.0.0.1:39477/api/v1/namespaces/permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/pods/test-pod: dial tcp 127.0.0.1:39477: connect: connection refused; retrying...
I0813 20:07:37.659690  110484 httplog.go:90] GET /api/v1/namespaces/default: (1.99883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:37.661958  110484 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.825333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:37.663892  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.439707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:37.752994  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:37.753187  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:37.754039  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:37.754096  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:37.755318  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:37.756863  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:37.757313  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:38.753185  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:38.753282  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:38.754987  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:38.755039  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:38.755440  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:38.757057  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:38.757482  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:39.753438  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:39.753482  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:39.755717  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:39.755799  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:39.756463  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:39.757315  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:39.757646  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:40.753701  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:40.753738  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:40.755848  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:40.755888  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:40.756646  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:40.757485  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:40.757799  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:41.260995  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35861/apis/events.k8s.io/v1beta1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/events: dial tcp 127.0.0.1:35861: connect: connection refused' (may retry after sleeping)
E0813 20:07:41.566553  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
I0813 20:07:41.753864  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:41.753867  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:41.756312  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:41.756324  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:41.756801  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:41.758306  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:41.758328  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:42.754116  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:42.754202  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:42.756449  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:42.756486  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:42.757116  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:42.758463  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:42.758678  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:43.754307  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:43.754419  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:43.756714  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:43.756735  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:43.757273  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:43.758642  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:43.758821  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:44.754466  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:44.754575  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:44.756963  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:44.757038  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:44.757441  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:44.758796  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:44.759009  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:45.754675  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:45.754792  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:45.757328  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:45.757328  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:45.757681  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:45.759003  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:45.759207  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:46.325569  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39477/apis/events.k8s.io/v1beta1/namespaces/permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/events: dial tcp 127.0.0.1:39477: connect: connection refused' (may retry after sleeping)
I0813 20:07:46.754881  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:46.754985  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:46.757532  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:46.757579  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:46.757871  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:46.759164  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:46.759308  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:47.659245  110484 httplog.go:90] GET /api/v1/namespaces/default: (1.434105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:47.661375  110484 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.531632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:47.663110  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.399866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:47.755071  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:47.755104  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:47.757660  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:47.757708  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:47.758043  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:47.759302  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:47.759493  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:48.755218  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:48.755364  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:48.757851  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:48.757851  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:48.758205  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:48.759469  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:48.759648  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:49.755415  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:49.755472  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:49.759776  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:49.759811  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:49.760523  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:49.760547  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:49.760567  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:50.755575  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:50.755644  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:50.759933  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:50.759971  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:50.760809  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:50.760977  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:50.760992  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:51.755866  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:51.756458  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:51.760195  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:51.760241  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:51.760945  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:51.761145  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:51.761161  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:52.757806  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:52.758146  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:52.760864  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:52.760890  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:52.761833  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:52.761960  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:52.761987  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:52.974290  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35861/apis/events.k8s.io/v1beta1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/events: dial tcp 127.0.0.1:35861: connect: connection refused' (may retry after sleeping)
I0813 20:07:53.758053  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:53.758366  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:53.761018  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:53.761092  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:53.761985  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:53.762097  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:53.762129  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:54.367302  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
I0813 20:07:54.758392  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:54.758563  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:54.761173  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:54.761736  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:54.762242  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:54.762253  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:54.762287  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:55.758589  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:55.758724  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:55.761721  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:55.761998  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:55.762403  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:55.762404  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:55.762448  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:56.759225  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:56.759275  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:56.761940  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:56.762156  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:56.762563  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:56.762567  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:56.762572  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:57.661327  110484 httplog.go:90] GET /api/v1/namespaces/default: (3.230782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:57.663723  110484 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.734627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:57.666056  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.882641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:57.759433  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:57.759487  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:57.762120  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:57.762399  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:57.762716  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:57.762727  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:57.762753  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 20:07:57.885944  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39477/apis/events.k8s.io/v1beta1/namespaces/permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/events: dial tcp 127.0.0.1:39477: connect: connection refused' (may retry after sleeping)
I0813 20:07:58.759649  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:58.759911  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:58.762418  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:58.762562  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:58.762903  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:58.762930  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:58.762930  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:58.960438  110484 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods: (2.340717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:58.961216  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:07:58.961242  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:07:58.961361  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:07:58.961397  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:07:58.963375  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.684647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:58.963969  110484 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod/status: (1.984742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55612]
I0813 20:07:58.964492  110484 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/events: (2.195718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:58.965584  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.220382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55612]
I0813 20:07:58.965916  110484 generic_scheduler.go:1193] Node test-node-0 is a potential node for preemption.
I0813 20:07:58.968765  110484 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod/status: (2.333691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:58.971418  110484 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/waiting-pod: (2.238968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:58.974485  110484 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/events: (2.427827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.062863  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.625728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.163334  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.765127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.263888  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.616184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.362908  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.640359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.463080  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.750962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.563164  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.923322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.663074  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.817823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.759849  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:59.760073  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:59.762911  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:59.762953  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:59.763107  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:59.763081  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:59.763084  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.921054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.763102  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:07:59.763355  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:07:59.763637  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:07:59.763778  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:07:59.763816  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:07:59.765792  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.459785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:07:59.766689  110484 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/events: (2.295179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:59.766800  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.969803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55620]
I0813 20:07:59.863495  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.200714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:07:59.963069  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.764898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.063158  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.848891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.163026  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.745402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.263153  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.880329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.363252  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.935279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.463043  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.626527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.563228  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.900303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.663261  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.938454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.751401  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:00.751444  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:00.751673  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:00.751744  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:00.754074  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.994596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.754177  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.086709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:00.755774  110484 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/events/preemptor-pod.15ba944d339a308c: (3.180209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55630]
I0813 20:08:00.760317  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:00.760317  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:00.762999  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.825029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:00.763238  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:00.763394  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:00.763241  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:00.763262  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:00.763520  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:00.763537  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:00.763667  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:00.763694  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:00.763699  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:00.765835  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.848142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:00.765854  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.755344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:00.863259  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.94686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:00.963139  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.76378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.063280  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.999464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.163136  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.742285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.262908  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.725825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.363319  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.995266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.463242  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.885631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.563179  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.914534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.663233  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.856263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.760538  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:01.760644  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:01.763185  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.001326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.763488  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:01.763535  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:01.763663  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:01.763685  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:01.763904  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:01.764061  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:01.764074  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:01.764213  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:01.764253  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:01.766180  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.511588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:01.766184  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.715204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:01.863073  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.752877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:01.963642  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.186476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.063140  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.840303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.163199  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.899394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.263331  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.002772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.363510  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.151769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.463419  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.014259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.563271  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.974886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.663259  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.917489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.760771  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:02.761081  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:02.762793  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.54686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.763647  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:02.763682  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:02.763798  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:02.763864  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:02.763993  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:02.764104  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:02.764044  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:02.764400  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:02.764706  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:02.766692  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.497659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.766754  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.45327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
E0813 20:08:02.779840  110484 factory.go:599] Error getting pod permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/test-pod for retry: Get http://127.0.0.1:39477/api/v1/namespaces/permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/pods/test-pod: dial tcp 127.0.0.1:39477: connect: connection refused; retrying...
I0813 20:08:02.863128  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.877013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:02.963265  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.980888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.062812  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.546845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.163862  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.810802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.262862  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.590767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.362964  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.71895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.463164  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.445126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.564644  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.436972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.663098  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.799372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.760909  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:03.761346  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:03.763788  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:03.763878  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:03.763893  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.636441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.763913  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:03.763922  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:03.763964  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:03.764007  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:03.764030  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:03.764061  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:03.764308  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:03.765488  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.136044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:03.765744  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.485917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:03.863137  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.904813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:03.963668  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.455274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.063448  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.220549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.162678  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.444732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.263036  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.723806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.362851  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.653906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.463028  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.82625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.563021  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.860605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
E0813 20:08:04.565504  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35861/apis/events.k8s.io/v1beta1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/events: dial tcp 127.0.0.1:35861: connect: connection refused' (may retry after sleeping)
I0813 20:08:04.662661  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.412321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.761789  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:04.761833  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:04.762924  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.750351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.763958  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:04.763993  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:04.764131  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:04.764149  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:04.764161  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:04.764173  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:04.764317  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:04.764365  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:04.764470  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:04.766315  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.559682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:04.766316  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.628881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:04.863137  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.86716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:04.962880  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.688403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.062997  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.545596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.163140  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.668225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.263419  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.966435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.362867  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.595796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.462960  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.710143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.565974  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (4.785151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.663162  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.981942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.761978  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:05.761982  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:05.762537  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.360545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.764119  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:05.764146  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:05.764228  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:05.764235  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:05.764291  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:05.764377  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:05.764456  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:05.764498  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:05.764726  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:05.766168  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.338039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:05.766312  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.223416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.863229  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.045287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:05.962493  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.303689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.063639  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.12001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.163171  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.524868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.262976  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.736012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.362791  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.496236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.462871  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.682821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.562905  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.703292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.662868  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.704925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.762414  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:06.762470  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:06.762923  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.508224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.764262  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:06.764358  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:06.764394  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:06.764404  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:06.764497  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:06.764545  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:06.764575  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:06.765323  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:06.765345  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:06.766300  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.469405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:06.766301  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.364619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:06.864201  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.679021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:06.963176  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.954051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.063014  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.748014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.163263  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.016354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.263395  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.146851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.362898  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.64632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.465906  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (4.606728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.562834  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.609912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.659662  110484 httplog.go:90] GET /api/v1/namespaces/default: (1.656442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.661481  110484 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.37365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.662716  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.617924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:07.663856  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.757221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.752391  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:07.752443  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:07.752630  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:07.752683  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:07.755199  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.165547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.755199  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.022341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:07.762698  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:07.762926  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.768117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:07.763074  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:07.764492  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:07.764513  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:07.764711  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:07.764736  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:07.764804  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:07.764864  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:07.764901  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:07.765605  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:07.765662  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:07.766524  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.409644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:07.766525  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.310579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.862802  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.569782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:07.964274  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.035744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.062762  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.445307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.163145  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.823724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.263500  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.18809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.363253  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.934622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.463298  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.990782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.563294  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.939851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.663375  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.960588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
E0813 20:08:08.706118  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39477/apis/events.k8s.io/v1beta1/namespaces/permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/events: dial tcp 127.0.0.1:39477: connect: connection refused' (may retry after sleeping)
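(The E-level line above is unrelated to the preemptor pod: it appears to come from an event broadcaster left over from an earlier permit-plugin test whose apiserver at 127.0.0.1:39477 is no longer listening, so the POST to events.k8s.io fails with "connection refused" and, as the message says, may be retried after sleeping. A hedged sketch of that retry-after-sleep pattern follows; writeEvent, the attempt count, and the sleep interval are hypothetical stand-ins, not the real client-go internals.)

```go
// Sketch of the "may retry after sleeping" behavior implied by the message:
// on a failed write the broadcaster does not drop the event outright but
// sleeps and tries again a bounded number of times.
package schedulertest

import (
	"log"
	"time"
)

func writeWithRetry(writeEvent func() error) {
	const attempts = 3
	for i := 0; i < attempts; i++ {
		err := writeEvent()
		if err == nil {
			return
		}
		log.Printf("Unable to write event: %v (may retry after sleeping)", err)
		time.Sleep(10 * time.Second)
	}
}
```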
I0813 20:08:08.763560  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.395325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.763682  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:08.764393  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:08.764662  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:08.765765  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:08.765005  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:08.765252  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:08.765863  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:08.765876  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:08.765920  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:08.766044  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:08.766087  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:08.769667  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.484891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.770142  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.127382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:08.863146  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.918205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:08.963078  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.557398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.063246  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.94576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.163326  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.031628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.263411  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.073557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.364081  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.696694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.463206  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.924287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.563983  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.456225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.663238  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.914652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.762812  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.585861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.763889  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:09.765792  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:09.765931  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:09.765993  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:09.766002  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:09.766114  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:09.766116  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:09.766141  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:09.766160  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:09.766262  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:09.766404  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:09.768466  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.649585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:09.768763  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.140683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:09.863407  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.097886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:09.962894  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.596225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.063872  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.650396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.162969  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.671992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.263323  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.057856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.363251  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.065809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.463465  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.149966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.562972  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.706502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.663284  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.996184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.764236  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:10.766056  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:10.766097  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:10.766309  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:10.766330  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:10.766480  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:10.766554  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
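(Each "Updating pod condition ... (PodScheduled==False, Reason=Unschedulable)" line corresponds to the scheduler reasserting a status condition of the shape sketched below on the pod; the Message value mirrors the "no fit" reason above, and the struct literal itself is illustrative rather than lifted from the scheduler source.)

```go
// The condition the scheduler keeps reasserting on the unschedulable preemptor pod.
package schedulertest

import v1 "k8s.io/api/core/v1"

var unschedulableCond = v1.PodCondition{
	Type:    v1.PodScheduled,
	Status:  v1.ConditionFalse,
	Reason:  v1.PodReasonUnschedulable, // the string "Unschedulable"
	Message: "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.",
}
```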
I0813 20:08:10.767160  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:10.767188  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:10.767201  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:10.767213  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:10.768351  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (7.115885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:10.769497  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.997308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:10.770021  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.620566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57144]
I0813 20:08:10.863362  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.053025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:10.963067  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.857772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.063214  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.950014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.164178  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.899103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.263102  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.786999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.363043  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.724839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.463084  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.852183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.563195  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.957022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.663052  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.779198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.762860  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.666385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.764448  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:11.766218  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:11.766244  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:11.766365  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:11.766380  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:11.766546  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:11.766619  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:11.767310  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:11.767384  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:11.767403  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:11.767404  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:11.768484  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.580932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:11.768570  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.411302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.862780  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.490609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:11.963839  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.013365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.063137  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.935165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.162818  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.5634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.263295  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.032136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.363170  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.953629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.463252  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.976017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.562753  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.518997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.663221  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.043292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.764009  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.657829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.764666  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:12.766363  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:12.766482  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:12.766859  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:12.766886  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:12.767076  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:12.767127  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:12.767406  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:12.767535  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:12.767545  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:12.767553  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:12.769099  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.723712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:12.769108  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.71428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:12.863361  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.081305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:12.963278  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.033418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.062879  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.637673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.163111  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.750982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.263297  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.960715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.362928  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.688076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.462862  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.646041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.563218  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.956558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.663702  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.360026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.763448  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.259783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.764923  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:13.766631  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:13.766784  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:13.766798  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:13.766950  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:13.766989  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:13.767309  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:13.767649  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:13.767674  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:13.767732  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:13.767747  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:13.772677  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.620303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:13.773050  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.452283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:13.863128  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.873431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:13.963065  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.826186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.063211  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.950518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.168453  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (6.90489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.263054  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.781658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.366354  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (5.01344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.462851  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.6268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.563107  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.908808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.663187  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.882648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.762904  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.726468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.765096  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:14.766826  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:14.767000  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:14.767022  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:14.767141  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:14.767190  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:14.767484  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:14.767787  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:14.767903  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:14.767925  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:14.767934  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:14.769889  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.117359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:14.769905  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.827182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.863498  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.178561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:14.963207  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.957257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.063196  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.905765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.163019  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.784595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.262944  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.719236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.363376  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.124748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.463081  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.829175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.562986  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.715883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.663529  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.227757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.757082  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:15.757116  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:15.757306  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:15.757352  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:15.760012  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.251725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:15.760077  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.359699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.762767  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.608575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.765271  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:15.767044  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:15.767270  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:15.767345  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:15.767498  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:15.767677  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:15.767977  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:15.768052  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:15.768051  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:15.768067  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:15.767538  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:15.771032  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.68533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:15.772114  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.580724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:15.863031  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.772913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
E0813 20:08:15.881976  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35861/apis/events.k8s.io/v1beta1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/events: dial tcp 127.0.0.1:35861: connect: connection refused' (may retry after sleeping)
I0813 20:08:15.963306  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.883446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.062671  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.430356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.163277  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.940236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.264054  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.698434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.363242  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.998116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.462960  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.723915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.562905  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.620228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.663242  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.005434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.764778  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.610006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.765462  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:16.767236  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:16.767383  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:16.767403  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:16.767613  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:16.767672  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:16.768126  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:16.768188  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:16.768207  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:16.768402  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:16.768448  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:16.769368  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.338661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:16.769568  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.597543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.863400  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.103728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:16.963171  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.887183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.062956  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.734745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.162985  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.665985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.262999  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.715197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.363112  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.854459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.463219  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.9502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.563937  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.032753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.660038  110484 httplog.go:90] GET /api/v1/namespaces/default: (1.828811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.662781  110484 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.234534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.663666  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.239075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:17.665294  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.496326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.762861  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.675587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.765643  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:17.767411  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:17.767533  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:17.767543  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:17.767707  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:17.767742  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:17.768382  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:17.768411  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:17.768508  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:17.768545  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:17.768558  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:17.769573  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.428816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:17.769782  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.526395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.865269  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.833977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:17.962893  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.596817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.063382  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.099123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.163308  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.950681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.263350  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.090764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.363287  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.994873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.463022  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.739428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.563259  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.00768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.663257  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.985418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.764109  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.959041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.765839  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:18.767609  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:18.767785  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:18.767800  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:18.768012  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:18.768054  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:18.768510  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:18.768652  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:18.768708  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:18.768720  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:18.768729  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:18.770118  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.678277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:18.770128  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.818713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.862856  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.652162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:18.963033  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.811014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.062853  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.560503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.163141  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.722249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.263457  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.114895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.363080  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.882668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
E0813 20:08:19.388034  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:39477/apis/events.k8s.io/v1beta1/namespaces/permit-pluginadaef044-33e8-4f36-8746-79dae8b9a84d/events: dial tcp 127.0.0.1:39477: connect: connection refused' (may retry after sleeping)
I0813 20:08:19.462999  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.762369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.563129  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.893209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.662804  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.569647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.762871  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.684374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.766029  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:19.767798  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:19.767985  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:19.768007  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:19.768152  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:19.768209  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:19.768690  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:19.768793  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:19.768807  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:19.768817  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:19.768823  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:19.770193  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.673657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:19.770201  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.684121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:19.863535  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.164662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:19.963133  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.82122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
E0813 20:08:19.967841  110484 factory.go:599] Error getting pod permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/signalling-pod for retry: Get http://127.0.0.1:35861/api/v1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/pods/signalling-pod: dial tcp 127.0.0.1:35861: connect: connection refused; retrying...
I0813 20:08:20.063402  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.036704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.163034  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.770214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.263659  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.296405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.363336  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.984946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.463846  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.52189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.563404  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.134149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.663221  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.968928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.762729  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.55453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.766202  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:20.767998  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:20.768190  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:20.768211  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:20.768417  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:20.768466  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:20.769005  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:20.769343  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:20.769363  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:20.769383  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:20.769471  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:20.770459  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.338725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:20.770514  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.736883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:20.862804  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.54485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:20.963062  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.817787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.064437  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.221267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.162733  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.522226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.263054  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.778704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.362957  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.713564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.462875  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.628809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.563007  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.790191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.663427  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.163826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.763359  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.091074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.766302  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:21.768211  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:21.768413  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:21.768518  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:21.768812  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:21.768870  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:21.769704  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:21.769742  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:21.769742  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:21.769758  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:21.769771  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:21.770850  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.674467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:21.771056  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.879383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.863424  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.030238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:21.963209  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.923356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.063065  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.782049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.162875  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.608024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.263172  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.90404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.363237  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.90673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.463530  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.227479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.563700  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.159092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.662948  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.661828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.763470  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.962558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.766655  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:22.768405  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:22.768543  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:22.768553  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:22.768725  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:22.768765  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:22.770171  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:22.770404  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:22.770420  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:22.770409  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:22.770442  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:22.771727  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.778641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:22.772846  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.389429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:22.862925  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.593774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:22.963367  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.992866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.063207  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.915694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.163430  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.901506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.263080  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.84469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.363344  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.027329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.462987  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.700004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.563188  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.946673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.662998  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.769022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.762768  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.598791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.766821  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:23.768620  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:23.768807  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:23.768823  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:23.768954  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:23.768998  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:23.770307  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:23.770587  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:23.770619  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:23.770638  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:23.770650  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:23.771287  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.488091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.771580  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.270516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:23.863220  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.019827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:23.963255  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.961948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.063320  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.066909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.162942  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.677864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.263696  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.515752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.363402  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.736334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.366771  110484 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.615905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.368205  110484 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.134047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.370783  110484 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (2.119502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.463173  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.927557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.563479  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.153486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.662889  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.507836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.762714  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.597112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.767001  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:24.768826  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:24.769002  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:24.769023  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:24.769168  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:24.769225  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:24.770548  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:24.770751  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:24.770781  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:24.771004  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:24.771028  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:24.771112  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.596149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:24.771857  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.190932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:24.863126  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.883583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:24.963235  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.019093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.063224  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.965552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.163819  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.488252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.263087  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.795348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.363007  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.663529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.463259  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.848293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.563241  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.906388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.663497  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.14088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.763144  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.875584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.767193  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:25.769041  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:25.769210  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:25.769224  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:25.769438  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:25.769501  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:25.770780  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:25.770875  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:25.770894  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:25.771097  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:25.771105  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:25.771927  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.07695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:25.771947  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.089802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:25.862956  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.71338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:25.962985  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.697899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.063533  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.976737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
E0813 20:08:26.121491  110484 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35861/apis/events.k8s.io/v1beta1/namespaces/permit-pluginb550c359-003c-43a1-a710-5ff05cd4a097/events: dial tcp 127.0.0.1:35861: connect: connection refused' (may retry after sleeping)
I0813 20:08:26.163315  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.996154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.263447  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.182307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.363096  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.831496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.463149  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.960416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.562975  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.74283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.663520  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.146528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.763658  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.440302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.767428  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:26.769300  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:26.769498  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:26.769529  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:26.769729  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:26.769790  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:26.771075  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:26.771213  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:26.771225  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:26.771252  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:26.771334  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:26.772515  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.37827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:26.773219  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.957716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:26.863677  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.346303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:26.964368  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.024557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.063949  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.47328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.163973  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.250035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.264032  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.628164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.363771  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.395441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.463849  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.462727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.563763  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.328138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.660398  110484 httplog.go:90] GET /api/v1/namespaces/default: (1.989131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.663123  110484 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.96937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.663395  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.241192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:27.665950  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.063901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.763345  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.031323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.767708  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:27.769542  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:27.769737  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:27.769758  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:27.770006  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:27.770086  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:27.771641  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:27.771646  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:27.771807  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:27.771827  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:27.771862  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:27.772798  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.890561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:27.773122  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.198252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.863428  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.979095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:27.963061  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.786029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.064819  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (3.50968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.163880  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.467631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.264085  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.520132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.363552  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.260526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.463572  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.103517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.563796  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.441086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.664325  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.937922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.763642  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.278642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.767965  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:28.769813  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:28.770033  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:28.770061  110484 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:28.770288  110484 factory.go:550] Unable to schedule preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 20:08:28.770357  110484 factory.go:624] Updating pod condition for preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 20:08:28.771813  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:28.771898  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:28.772005  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:28.772028  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:28.772133  110484 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 20:08:28.772695  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.914572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.773235  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.347626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:28.863546  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.292871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:28.962717  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.505474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:28.965984  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (2.850428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:28.969046  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/waiting-pod: (2.359148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:28.981317  110484 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/waiting-pod: (11.721669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:28.987060  110484 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:28.987101  110484 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/preemptor-pod
I0813 20:08:28.989472  110484 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/events: (1.836042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51612]
I0813 20:08:28.996951  110484 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (13.750883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:29.001493  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/waiting-pod: (2.647292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:29.004784  110484 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin616e1d85-4d97-4141-b9c3-86f3ded3b4cc/pods/preemptor-pod: (1.29581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:29.005553  110484 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=29055&timeout=6m3s&timeoutSeconds=363&watch=true: (1m1.253273983s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51304]
I0813 20:08:29.005820  110484 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29055&timeout=9m4s&timeoutSeconds=544&watch=true: (1m1.253162942s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51306]
I0813 20:08:29.005874  110484 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=29055&timeout=9m29s&timeoutSeconds=569&watch=true: (1m1.253638348s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51296]
I0813 20:08:29.005986  110484 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=29055&timeout=8m10s&timeoutSeconds=490&watch=true: (1m1.253561937s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51292]
I0813 20:08:29.006092  110484 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=29303&timeout=9m24s&timeoutSeconds=564&watch=true: (1m1.254362926s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51224]
I0813 20:08:29.006120  110484 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=29055&timeout=9m28s&timeoutSeconds=568&watch=true: (1m1.254578945s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51298]
I0813 20:08:29.006247  110484 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=29055&timeout=8m27s&timeoutSeconds=507&watch=true: (1m1.254742466s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51222]
I0813 20:08:29.006255  110484 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=29055&timeout=7m16s&timeoutSeconds=436&watch=true: (1m1.255324439s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51300]
I0813 20:08:29.006322  110484 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=29055&timeout=9m0s&timeoutSeconds=540&watch=true: (1m1.255040001s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51302]
E0813 20:08:29.006445  110484 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0813 20:08:29.006611  110484 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=29055&timeout=8m15s&timeoutSeconds=495&watch=true: (1m1.251180038s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51310]
I0813 20:08:29.006626  110484 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=29055&timeout=7m12s&timeoutSeconds=432&watch=true: (1m1.251179336s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51308]
I0813 20:08:29.010846  110484 httplog.go:90] DELETE /api/v1/nodes: (5.599305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:29.011094  110484 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0813 20:08:29.013197  110484 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.800002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
I0813 20:08:29.022135  110484 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (8.277271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55614]
--- FAIL: TestPreemptWithPermitPlugin (64.83s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190813-200023.xml
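Note on the failure above: both assertions in framework_test.go report "timed out waiting for the condition", which is the error that wait.Poll (k8s.io/apimachinery/pkg/util/wait) returns when a polled condition never becomes true before its timeout. The sketch below is a hypothetical illustration of such a wait loop, not the actual test code; the function name waitForPodScheduled and the getPod parameter are assumptions made only for this example.

package example

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForPodScheduled polls getPod (a placeholder for a client-go Get call)
// until the pod reports the PodScheduled=True condition or the timeout
// expires. On timeout, wait.Poll returns wait.ErrWaitTimeout, whose message
// is "timed out waiting for the condition", the error reported in the
// failure above.
func waitForPodScheduled(getPod func() (*v1.Pod, error), timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := getPod()
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodScheduled && cond.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		// Still unscheduled (as in the repeated "no fit: 0/1 nodes are
		// available" lines above); keep polling until the timeout.
		return false, nil
	})
}

In this run the preemptor pod stayed Unschedulable for the whole minute the test waited, so a condition like the one sketched above never returned true, and the waiting pod was consequently never preempted or deleted.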

Error lines from build-log.txt

... skipping 683 lines ...
W0813 19:55:07.382] I0813 19:55:07.380465   53082 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...
W0813 19:55:07.394] I0813 19:55:07.393974   53082 leaderelection.go:251] successfully acquired lease kube-system/kube-controller-manager
W0813 19:55:07.395] I0813 19:55:07.394618   53082 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"fac45649-2c28-4e9c-9a20-de8e832a18ab", APIVersion:"v1", ResourceVersion:"150", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 1daf42f49d22_713c3c2c-ee7e-4df2-af09-1e12439df374 became leader
I0813 19:55:07.495] +++ [0813 19:55:07] On try 2, controller-manager: ok
W0813 19:55:07.596] I0813 19:55:07.549761   53082 plugins.go:100] No cloud provider specified.
W0813 19:55:07.597] W0813 19:55:07.549826   53082 controllermanager.go:555] "serviceaccount-token" is disabled because there is no private key
W0813 19:55:07.597] E0813 19:55:07.550529   53082 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0813 19:55:07.598] W0813 19:55:07.550558   53082 controllermanager.go:527] Skipping "service"
W0813 19:55:07.598] I0813 19:55:07.550973   53082 controllermanager.go:535] Started "clusterrole-aggregation"
W0813 19:55:07.598] I0813 19:55:07.551004   53082 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0813 19:55:07.598] I0813 19:55:07.551261   53082 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
W0813 19:55:07.599] I0813 19:55:07.551475   53082 controllermanager.go:535] Started "podgc"
W0813 19:55:07.599] I0813 19:55:07.551740   53082 gc_controller.go:76] Starting GC controller
... skipping 40 lines ...
W0813 19:55:07.874] I0813 19:55:07.872141   53082 ttl_controller.go:116] Starting TTL controller
W0813 19:55:07.875] I0813 19:55:07.872160   53082 controller_utils.go:1029] Waiting for caches to sync for TTL controller
W0813 19:55:07.875] I0813 19:55:07.872831   53082 controllermanager.go:535] Started "persistentvolume-binder"
W0813 19:55:07.875] I0813 19:55:07.873323   53082 controllermanager.go:535] Started "cronjob"
W0813 19:55:07.876] W0813 19:55:07.873350   53082 controllermanager.go:514] "bootstrapsigner" is disabled
W0813 19:55:07.876] I0813 19:55:07.873964   53082 node_lifecycle_controller.go:77] Sending events to api server
W0813 19:55:07.876] E0813 19:55:07.874068   53082 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0813 19:55:07.877] W0813 19:55:07.874084   53082 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0813 19:55:07.877] W0813 19:55:07.874100   53082 controllermanager.go:527] Skipping "ttl-after-finished"
W0813 19:55:07.877] W0813 19:55:07.874110   53082 controllermanager.go:527] Skipping "root-ca-cert-publisher"
W0813 19:55:07.877] I0813 19:55:07.874690   53082 controllermanager.go:535] Started "replicationcontroller"
W0813 19:55:07.878] I0813 19:55:07.875371   53082 pv_controller_base.go:282] Starting persistent volume controller
W0813 19:55:07.878] I0813 19:55:07.875451   53082 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
... skipping 59 lines ...
W0813 19:55:08.093] I0813 19:55:08.093391   53082 controller_utils.go:1036] Caches are synced for namespace controller
W0813 19:55:08.094] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0813 19:55:08.152] I0813 19:55:08.151510   53082 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0813 19:55:08.163] I0813 19:55:08.162515   53082 controller_utils.go:1036] Caches are synced for service account controller
W0813 19:55:08.163] I0813 19:55:08.162527   53082 controller_utils.go:1036] Caches are synced for certificate controller
W0813 19:55:08.165] I0813 19:55:08.165459   49606 controller.go:606] quota admission added evaluator for: serviceaccounts
W0813 19:55:08.177] E0813 19:55:08.176142   53082 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0813 19:55:08.278] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I0813 19:55:08.278] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   40s
I0813 19:55:08.278] Recording: run_kubectl_version_tests
I0813 19:55:08.278] Running command: run_kubectl_version_tests
I0813 19:55:08.278] 
I0813 19:55:08.278] +++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 23 lines ...
W0813 19:55:08.740] I0813 19:55:08.664732   53082 disruption.go:341] Sending events to api server.
W0813 19:55:08.740] I0813 19:55:08.670514   53082 controller_utils.go:1036] Caches are synced for deployment controller
W0813 19:55:08.740] I0813 19:55:08.672233   53082 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0813 19:55:08.740] I0813 19:55:08.676050   53082 controller_utils.go:1036] Caches are synced for ReplicationController controller
W0813 19:55:08.740] I0813 19:55:08.690534   53082 controller_utils.go:1036] Caches are synced for PVC protection controller
W0813 19:55:08.740] I0813 19:55:08.691650   53082 controller_utils.go:1036] Caches are synced for endpoint controller
W0813 19:55:08.741] W0813 19:55:08.706709   53082 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0813 19:55:08.753] I0813 19:55:08.752345   53082 controller_utils.go:1036] Caches are synced for attach detach controller
W0813 19:55:08.753] I0813 19:55:08.753183   53082 controller_utils.go:1036] Caches are synced for expand controller
W0813 19:55:08.754] I0813 19:55:08.753914   53082 controller_utils.go:1036] Caches are synced for daemon sets controller
W0813 19:55:08.760] I0813 19:55:08.759672   53082 controller_utils.go:1036] Caches are synced for garbage collector controller
W0813 19:55:08.760] I0813 19:55:08.760087   53082 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W0813 19:55:08.771] I0813 19:55:08.770373   53082 controller_utils.go:1036] Caches are synced for taint controller
... skipping 64 lines ...
I0813 19:55:12.001] +++ working dir: /go/src/k8s.io/kubernetes
I0813 19:55:12.005] +++ command: run_RESTMapper_evaluation_tests
I0813 19:55:12.017] +++ [0813 19:55:12] Creating namespace namespace-1565726112-14832
I0813 19:55:12.098] namespace/namespace-1565726112-14832 created
I0813 19:55:12.174] Context "test" modified.
I0813 19:55:12.182] +++ [0813 19:55:12] Testing RESTMapper
I0813 19:55:12.297] +++ [0813 19:55:12] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0813 19:55:12.313] +++ exit code: 0
I0813 19:55:12.460] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0813 19:55:12.461] bindings                                                                      true         Binding
I0813 19:55:12.462] componentstatuses                 cs                                          false        ComponentStatus
I0813 19:55:12.462] configmaps                        cm                                          true         ConfigMap
I0813 19:55:12.462] endpoints                         ep                                          true         Endpoints
... skipping 661 lines ...
I0813 19:55:32.982] core.sh:241: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2
I0813 19:55:33.061] (Bpoddisruptionbudget.policy/test-pdb-2 created
I0813 19:55:33.165] core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
I0813 19:55:33.248] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0813 19:55:33.352] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0813 19:55:33.433] (Bpoddisruptionbudget.policy/test-pdb-4 created
W0813 19:55:33.534] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0813 19:55:33.535] error: setting 'all' parameter but found a non empty selector. 
W0813 19:55:33.535] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0813 19:55:33.535] I0813 19:55:32.878524   49606 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0813 19:55:33.615] error: min-available and max-unavailable cannot be both specified
I0813 19:55:33.716] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0813 19:55:33.720] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:55:33.913] (Bpod/env-test-pod created
I0813 19:55:34.114] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0813 19:55:34.114] Name:         env-test-pod
I0813 19:55:34.114] Namespace:    test-kubectl-describe-pod
... skipping 176 lines ...
I0813 19:55:48.050] (Bpod/valid-pod patched
I0813 19:55:48.154] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0813 19:55:48.233] (Bpod/valid-pod patched
I0813 19:55:48.333] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0813 19:55:48.511] (Bpod/valid-pod patched
I0813 19:55:48.613] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0813 19:55:48.806] (B+++ [0813 19:55:48] "kubectl patch with resourceVersion 497" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0813 19:55:49.064] pod "valid-pod" deleted
I0813 19:55:49.079] pod/valid-pod replaced
I0813 19:55:49.185] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0813 19:55:49.361] (BSuccessful
I0813 19:55:49.362] message:error: --grace-period must have --force specified
I0813 19:55:49.362] has:\-\-grace-period must have \-\-force specified
I0813 19:55:49.534] Successful
I0813 19:55:49.534] message:error: --timeout must have --force specified
I0813 19:55:49.534] has:\-\-timeout must have \-\-force specified
W0813 19:55:49.698] W0813 19:55:49.697477   53082 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0813 19:55:49.799] node/node-v1-test created
I0813 19:55:49.877] node/node-v1-test replaced
I0813 19:55:49.982] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0813 19:55:50.066] (Bnode "node-v1-test" deleted
I0813 19:55:50.174] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0813 19:55:50.462] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 33 lines ...
I0813 19:55:52.653] namespace/namespace-1565726152-18133 created
I0813 19:55:52.731] Context "test" modified.
W0813 19:55:52.833] Edit cancelled, no changes made.
W0813 19:55:52.833] Edit cancelled, no changes made.
W0813 19:55:52.834] Edit cancelled, no changes made.
W0813 19:55:52.834] Edit cancelled, no changes made.
W0813 19:55:52.834] error: 'name' already has a value (valid-pod), and --overwrite is false
W0813 19:55:52.834] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0813 19:55:52.934] core.sh:610: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:55:53.011] (Bpod/redis-master created
I0813 19:55:53.015] pod/valid-pod created
I0813 19:55:53.126] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0813 19:55:53.224] (Bcore.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
... skipping 75 lines ...
I0813 19:55:59.905] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0813 19:55:59.908] +++ working dir: /go/src/k8s.io/kubernetes
I0813 19:55:59.910] +++ command: run_kubectl_create_error_tests
I0813 19:55:59.922] +++ [0813 19:55:59] Creating namespace namespace-1565726159-3283
I0813 19:56:00.000] namespace/namespace-1565726159-3283 created
I0813 19:56:00.076] Context "test" modified.
I0813 19:56:00.084] +++ [0813 19:56:00] Testing kubectl create with error
W0813 19:56:00.184] Error: must specify one of -f and -k
W0813 19:56:00.185] 
W0813 19:56:00.185] Create a resource from a file or from stdin.
W0813 19:56:00.185] 
W0813 19:56:00.185]  JSON and YAML formats are accepted.
W0813 19:56:00.185] 
W0813 19:56:00.185] Examples:
... skipping 41 lines ...
W0813 19:56:00.190] 
W0813 19:56:00.190] Usage:
W0813 19:56:00.190]   kubectl create -f FILENAME [options]
W0813 19:56:00.190] 
W0813 19:56:00.191] Use "kubectl <command> --help" for more information about a given command.
W0813 19:56:00.191] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0813 19:56:00.342] +++ [0813 19:56:00] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0813 19:56:00.442] kubectl convert is DEPRECATED and will be removed in a future version.
W0813 19:56:00.443] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0813 19:56:00.543] +++ exit code: 0
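The create-error case above reduces to two kubectl create failure modes. A minimal sketch for reproducing them against a test cluster (the file path is the one named in the log; everything else is an assumption):

$ kubectl create                                                    # neither -f nor -k given
Error: must specify one of -f and -k
$ kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml   # client-side schema validation rejects the nil args entry
error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ...
$ kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false   # skips the client-side check, as the error text itself suggests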
I0813 19:56:00.562] Recording: run_kubectl_apply_tests
I0813 19:56:00.562] Running command: run_kubectl_apply_tests
I0813 19:56:00.584] 
... skipping 19 lines ...
W0813 19:56:02.828] I0813 19:56:02.828177   49606 client.go:354] parsed scheme: ""
W0813 19:56:02.829] I0813 19:56:02.828323   49606 client.go:354] scheme "" not registered, fallback to default scheme
W0813 19:56:02.829] I0813 19:56:02.828431   49606 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0813 19:56:02.829] I0813 19:56:02.828515   49606 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0813 19:56:02.830] I0813 19:56:02.829280   49606 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0813 19:56:02.832] I0813 19:56:02.831778   49606 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0813 19:56:02.930] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0813 19:56:03.031] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0813 19:56:03.032] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0813 19:56:03.054] +++ exit code: 0
I0813 19:56:03.086] Recording: run_kubectl_run_tests
I0813 19:56:03.086] Running command: run_kubectl_run_tests
I0813 19:56:03.108] 
... skipping 95 lines ...
I0813 19:56:05.688] Context "test" modified.
I0813 19:56:05.695] +++ [0813 19:56:05] Testing kubectl create filter
I0813 19:56:05.789] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:56:05.957] (Bpod/selector-test-pod created
I0813 19:56:06.058] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0813 19:56:06.155] (BSuccessful
I0813 19:56:06.155] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0813 19:56:06.155] has:pods "selector-test-pod-dont-apply" not found
I0813 19:56:06.241] pod "selector-test-pod" deleted
I0813 19:56:06.260] +++ exit code: 0
I0813 19:56:06.298] Recording: run_kubectl_apply_deployments_tests
I0813 19:56:06.299] Running command: run_kubectl_apply_deployments_tests
I0813 19:56:06.321] 
... skipping 31 lines ...
W0813 19:56:08.812] I0813 19:56:08.717946   53082 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565726166-21938", Name:"nginx", UID:"32ef9628-3c2d-45bb-b8f0-5ac02b61dcc1", APIVersion:"apps/v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0813 19:56:08.813] I0813 19:56:08.722966   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-7dbc4d9f", UID:"da050128-2d2f-4141-98d9-79076bc52b6a", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-zg82d
W0813 19:56:08.813] I0813 19:56:08.726867   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-7dbc4d9f", UID:"da050128-2d2f-4141-98d9-79076bc52b6a", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-pddz7
W0813 19:56:08.813] I0813 19:56:08.728292   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-7dbc4d9f", UID:"da050128-2d2f-4141-98d9-79076bc52b6a", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-rp2dq
I0813 19:56:08.914] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0813 19:56:13.081] (BSuccessful
I0813 19:56:13.081] message:Error from server (Conflict): error when applying patch:
I0813 19:56:13.082] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565726166-21938\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0813 19:56:13.082] to:
I0813 19:56:13.082] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0813 19:56:13.082] Name: "nginx", Namespace: "namespace-1565726166-21938"
I0813 19:56:13.085] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565726166-21938\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-13T19:56:08Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-13T19:56:08Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-13T19:56:08Z"]] "name":"nginx" "namespace":"namespace-1565726166-21938" "resourceVersion":"592" "selfLink":"/apis/apps/v1/namespaces/namespace-1565726166-21938/deployments/nginx" "uid":"32ef9628-3c2d-45bb-b8f0-5ac02b61dcc1"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-13T19:56:08Z" "lastUpdateTime":"2019-08-13T19:56:08Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-13T19:56:08Z" "lastUpdateTime":"2019-08-13T19:56:08Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0813 19:56:13.085] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0813 19:56:13.085] has:Error from server (Conflict)
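The Conflict above is deliberate: the manifest being applied carries metadata.resourceVersion "99" (visible in the patch body logged at 19:56:13.082), while the live deployment is already at resourceVersion 592, so the server's optimistic-concurrency check rejects the patch. A sketch of that interaction, using the file path from the log and the generic recovery step suggested by the error text:

$ kubectl apply -f hack/testdata/deployment-label-change2.yaml            # manifest pins a stale resourceVersion
Error from server (Conflict): ... the object has been modified; please apply your changes to the latest version and try again
$ kubectl get deployment nginx -o jsonpath='{.metadata.resourceVersion}'  # fetch the current version before retrying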
W0813 19:56:14.276] I0813 19:56:14.276093   53082 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565726157-24109
W0813 19:56:17.399] E0813 19:56:17.398439   53082 replica_set.go:450] Sync "namespace-1565726166-21938/nginx-7dbc4d9f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-7dbc4d9f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1565726166-21938/nginx-7dbc4d9f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: da050128-2d2f-4141-98d9-79076bc52b6a, UID in object meta: 
I0813 19:56:18.375] deployment.apps/nginx configured
W0813 19:56:18.476] I0813 19:56:18.380741   53082 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565726166-21938", Name:"nginx", UID:"9e1b75e1-98f9-49d9-bff1-8dc41062af65", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0813 19:56:18.477] I0813 19:56:18.385007   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-594f77b9f6", UID:"7d310d3f-c821-4182-81f8-4eed4422c00d", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-54k48
W0813 19:56:18.477] I0813 19:56:18.392665   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-594f77b9f6", UID:"7d310d3f-c821-4182-81f8-4eed4422c00d", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-75kqz
W0813 19:56:18.477] I0813 19:56:18.393801   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-594f77b9f6", UID:"7d310d3f-c821-4182-81f8-4eed4422c00d", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-77gf4
I0813 19:56:18.578] Successful
I0813 19:56:18.578] message:        "name": "nginx2"
I0813 19:56:18.578]           "name": "nginx2"
I0813 19:56:18.579] has:"name": "nginx2"
W0813 19:56:22.761] E0813 19:56:22.760754   53082 replica_set.go:450] Sync "namespace-1565726166-21938/nginx-594f77b9f6" failed with Operation cannot be fulfilled on replicasets.apps "nginx-594f77b9f6": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1565726166-21938/nginx-594f77b9f6, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7d310d3f-c821-4182-81f8-4eed4422c00d, UID in object meta: 
W0813 19:56:23.750] I0813 19:56:23.749159   53082 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565726166-21938", Name:"nginx", UID:"ccfaa32a-d15b-43f7-8e79-dcdc9c92ca8c", APIVersion:"apps/v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0813 19:56:23.755] I0813 19:56:23.754517   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-594f77b9f6", UID:"efa8560a-2154-48ad-b940-cd7d68e4e8e7", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-b26wm
W0813 19:56:23.759] I0813 19:56:23.758824   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-594f77b9f6", UID:"efa8560a-2154-48ad-b940-cd7d68e4e8e7", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-pxkbr
W0813 19:56:23.761] I0813 19:56:23.760421   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726166-21938", Name:"nginx-594f77b9f6", UID:"efa8560a-2154-48ad-b940-cd7d68e4e8e7", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-bkg2n
I0813 19:56:23.861] Successful
I0813 19:56:23.862] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 159 lines ...
I0813 19:56:25.845] +++ [0813 19:56:25] Creating namespace namespace-1565726185-204
I0813 19:56:25.921] namespace/namespace-1565726185-204 created
I0813 19:56:26.000] Context "test" modified.
I0813 19:56:26.007] +++ [0813 19:56:26] Testing kubectl get
I0813 19:56:26.103] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:56:26.193] (BSuccessful
I0813 19:56:26.194] message:Error from server (NotFound): pods "abc" not found
I0813 19:56:26.195] has:pods "abc" not found
I0813 19:56:26.291] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:56:26.384] (BSuccessful
I0813 19:56:26.384] message:Error from server (NotFound): pods "abc" not found
I0813 19:56:26.385] has:pods "abc" not found
I0813 19:56:26.477] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:56:26.567] (BSuccessful
I0813 19:56:26.567] message:{
I0813 19:56:26.567]     "apiVersion": "v1",
I0813 19:56:26.568]     "items": [],
... skipping 23 lines ...
I0813 19:56:26.928] has not:No resources found
I0813 19:56:27.021] Successful
I0813 19:56:27.022] message:NAME
I0813 19:56:27.022] has not:No resources found
I0813 19:56:27.121] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:56:27.227] (BSuccessful
I0813 19:56:27.228] message:error: the server doesn't have a resource type "foobar"
I0813 19:56:27.228] has not:No resources found
I0813 19:56:27.320] Successful
I0813 19:56:27.321] message:No resources found in namespace-1565726185-204 namespace.
I0813 19:56:27.321] has:No resources found
I0813 19:56:27.413] Successful
I0813 19:56:27.414] message:
I0813 19:56:27.414] has not:No resources found
I0813 19:56:27.509] Successful
I0813 19:56:27.510] message:No resources found in namespace-1565726185-204 namespace.
I0813 19:56:27.510] has:No resources found
I0813 19:56:27.609] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:56:27.707] (BSuccessful
I0813 19:56:27.708] message:Error from server (NotFound): pods "abc" not found
I0813 19:56:27.708] has:pods "abc" not found
I0813 19:56:27.709] FAIL!
I0813 19:56:27.709] message:Error from server (NotFound): pods "abc" not found
I0813 19:56:27.709] has not:List
I0813 19:56:27.710] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0813 19:56:27.837] Successful
I0813 19:56:27.838] message:I0813 19:56:27.784719   63654 loader.go:375] Config loaded from file:  /tmp/tmp.7n2JYRfix6/.kube/config
I0813 19:56:27.838] I0813 19:56:27.786574   63654 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0813 19:56:27.839] I0813 19:56:27.807636   63654 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0813 19:56:33.507] Successful
I0813 19:56:33.508] message:NAME    DATA   AGE
I0813 19:56:33.508] one     0      0s
I0813 19:56:33.508] three   0      0s
I0813 19:56:33.508] two     0      0s
I0813 19:56:33.508] STATUS    REASON          MESSAGE
I0813 19:56:33.509] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 19:56:33.509] has not:watch is only supported on individual resources
I0813 19:56:34.605] Successful
I0813 19:56:34.605] message:STATUS    REASON          MESSAGE
I0813 19:56:34.606] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 19:56:34.606] has not:watch is only supported on individual resources
I0813 19:56:34.611] +++ [0813 19:56:34] Creating namespace namespace-1565726194-23582
I0813 19:56:34.694] namespace/namespace-1565726194-23582 created
I0813 19:56:34.772] Context "test" modified.
I0813 19:56:34.872] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:56:35.049] (Bpod/valid-pod created
... skipping 104 lines ...
I0813 19:56:35.153] }
I0813 19:56:35.240] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 19:56:35.500] (B<no value>Successful
I0813 19:56:35.501] message:valid-pod:
I0813 19:56:35.501] has:valid-pod:
I0813 19:56:35.593] Successful
I0813 19:56:35.593] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0813 19:56:35.594] 	template was:
I0813 19:56:35.594] 		{.missing}
I0813 19:56:35.594] 	object given to jsonpath engine was:
I0813 19:56:35.596] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-13T19:56:35Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-13T19:56:35Z"}}, "name":"valid-pod", "namespace":"namespace-1565726194-23582", "resourceVersion":"688", "selfLink":"/api/v1/namespaces/namespace-1565726194-23582/pods/valid-pod", "uid":"e9a8972a-2bcb-44c7-9206-072df66b6ce5"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0813 19:56:35.596] has:missing is not found
I0813 19:56:35.686] Successful
I0813 19:56:35.687] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0813 19:56:35.687] 	template was:
I0813 19:56:35.688] 		{{.missing}}
I0813 19:56:35.688] 	raw data was:
I0813 19:56:35.689] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-13T19:56:35Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-13T19:56:35Z"}],"name":"valid-pod","namespace":"namespace-1565726194-23582","resourceVersion":"688","selfLink":"/api/v1/namespaces/namespace-1565726194-23582/pods/valid-pod","uid":"e9a8972a-2bcb-44c7-9206-072df66b6ce5"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0813 19:56:35.689] 	object given to template engine was:
I0813 19:56:35.690] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-13T19:56:35Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-13T19:56:35Z]] name:valid-pod namespace:namespace-1565726194-23582 resourceVersion:688 selfLink:/api/v1/namespaces/namespace-1565726194-23582/pods/valid-pod uid:e9a8972a-2bcb-44c7-9206-072df66b6ce5] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0813 19:56:35.691] has:map has no entry for key "missing"
W0813 19:56:35.791] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0813 19:56:36.777] Successful
I0813 19:56:36.778] message:NAME        READY   STATUS    RESTARTS   AGE
I0813 19:56:36.778] valid-pod   0/1     Pending   0          0s
I0813 19:56:36.778] STATUS      REASON          MESSAGE
I0813 19:56:36.778] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 19:56:36.778] has:STATUS
I0813 19:56:36.779] Successful
I0813 19:56:36.779] message:NAME        READY   STATUS    RESTARTS   AGE
I0813 19:56:36.779] valid-pod   0/1     Pending   0          0s
I0813 19:56:36.779] STATUS      REASON          MESSAGE
I0813 19:56:36.780] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 19:56:36.780] has:valid-pod
I0813 19:56:37.878] Successful
I0813 19:56:37.878] message:pod/valid-pod
I0813 19:56:37.878] has not:STATUS
I0813 19:56:37.879] Successful
I0813 19:56:37.879] message:pod/valid-pod
... skipping 144 lines ...
I0813 19:56:38.989] status:
I0813 19:56:38.990]   phase: Pending
I0813 19:56:38.990]   qosClass: Guaranteed
I0813 19:56:38.990] ---
I0813 19:56:38.990] has:name: valid-pod
I0813 19:56:39.070] Successful
I0813 19:56:39.071] message:Error from server (NotFound): pods "invalid-pod" not found
I0813 19:56:39.071] has:"invalid-pod" not found
I0813 19:56:39.160] pod "valid-pod" deleted
I0813 19:56:39.262] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:56:39.445] (Bpod/redis-master created
I0813 19:56:39.450] pod/valid-pod created
I0813 19:56:39.552] Successful
... skipping 35 lines ...
I0813 19:56:40.804] +++ command: run_kubectl_exec_pod_tests
I0813 19:56:40.817] +++ [0813 19:56:40] Creating namespace namespace-1565726200-11756
I0813 19:56:40.897] namespace/namespace-1565726200-11756 created
I0813 19:56:40.980] Context "test" modified.
I0813 19:56:40.987] +++ [0813 19:56:40] Testing kubectl exec POD COMMAND
I0813 19:56:41.077] Successful
I0813 19:56:41.077] message:Error from server (NotFound): pods "abc" not found
I0813 19:56:41.078] has:pods "abc" not found
I0813 19:56:41.251] pod/test-pod created
I0813 19:56:41.358] Successful
I0813 19:56:41.359] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0813 19:56:41.359] has not:pods "test-pod" not found
I0813 19:56:41.361] Successful
I0813 19:56:41.361] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0813 19:56:41.361] has not:pod or type/name must be specified
I0813 19:56:41.449] pod "test-pod" deleted
I0813 19:56:41.470] +++ exit code: 0
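Both exec failures above come back from the API server rather than from kubectl itself. A minimal reproduction sketch, assuming a pod named test-pod that exists but has not yet been scheduled to any node:

$ kubectl exec abc -- date            # no such pod
Error from server (NotFound): pods "abc" not found
$ kubectl exec test-pod -- date       # pod exists, but spec.nodeName is still empty
Error from server (BadRequest): pod test-pod does not have a host assigned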
I0813 19:56:41.503] Recording: run_kubectl_exec_resource_name_tests
I0813 19:56:41.504] Running command: run_kubectl_exec_resource_name_tests
I0813 19:56:41.526] 
... skipping 2 lines ...
I0813 19:56:41.532] +++ command: run_kubectl_exec_resource_name_tests
I0813 19:56:41.545] +++ [0813 19:56:41] Creating namespace namespace-1565726201-11585
I0813 19:56:41.626] namespace/namespace-1565726201-11585 created
I0813 19:56:41.702] Context "test" modified.
I0813 19:56:41.710] +++ [0813 19:56:41] Testing kubectl exec TYPE/NAME COMMAND
I0813 19:56:41.815] Successful
I0813 19:56:41.815] message:error: the server doesn't have a resource type "foo"
I0813 19:56:41.815] has:error:
I0813 19:56:41.907] Successful
I0813 19:56:41.908] message:Error from server (NotFound): deployments.apps "bar" not found
I0813 19:56:41.908] has:"bar" not found
I0813 19:56:42.076] pod/test-pod created
I0813 19:56:42.254] replicaset.apps/frontend created
W0813 19:56:42.355] I0813 19:56:42.260475   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726201-11585", Name:"frontend", UID:"5ad951a5-2862-4adb-a85a-eb1cb4155508", APIVersion:"apps/v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6jr7l
W0813 19:56:42.356] I0813 19:56:42.264488   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726201-11585", Name:"frontend", UID:"5ad951a5-2862-4adb-a85a-eb1cb4155508", APIVersion:"apps/v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j2vbv
W0813 19:56:42.356] I0813 19:56:42.265216   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726201-11585", Name:"frontend", UID:"5ad951a5-2862-4adb-a85a-eb1cb4155508", APIVersion:"apps/v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-frf47
I0813 19:56:42.457] configmap/test-set-env-config created
I0813 19:56:42.521] Successful
I0813 19:56:42.522] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0813 19:56:42.522] has:not implemented
I0813 19:56:42.621] Successful
I0813 19:56:42.621] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0813 19:56:42.622] has not:not found
I0813 19:56:42.622] Successful
I0813 19:56:42.623] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0813 19:56:42.623] has not:pod or type/name must be specified
I0813 19:56:42.732] Successful
I0813 19:56:42.733] message:Error from server (BadRequest): pod frontend-6jr7l does not have a host assigned
I0813 19:56:42.734] has not:not found
I0813 19:56:42.734] Successful
I0813 19:56:42.735] message:Error from server (BadRequest): pod frontend-6jr7l does not have a host assigned
I0813 19:56:42.735] has not:pod or type/name must be specified
I0813 19:56:42.818] pod "test-pod" deleted
I0813 19:56:42.905] replicaset.apps "frontend" deleted
I0813 19:56:42.996] configmap "test-set-env-config" deleted
I0813 19:56:43.014] +++ exit code: 0
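The TYPE/NAME variant layers resource resolution on top of the same checks. A sketch of the three error shapes seen above (the exact arguments used by the test script may differ):

$ kubectl exec foo/bar -- date                          # "foo" is not a registered resource type
error: the server doesn't have a resource type "foo"
$ kubectl exec deployment/bar -- date                   # valid type, missing object
Error from server (NotFound): deployments.apps "bar" not found
$ kubectl exec configmap/test-set-env-config -- date    # valid object, but the type has no pod selector to attach to
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented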
I0813 19:56:43.046] Recording: run_create_secret_tests
I0813 19:56:43.047] Running command: run_create_secret_tests
I0813 19:56:43.066] 
I0813 19:56:43.070] +++ Running case: test-cmd.run_create_secret_tests 
I0813 19:56:43.072] +++ working dir: /go/src/k8s.io/kubernetes
I0813 19:56:43.074] +++ command: run_create_secret_tests
I0813 19:56:43.170] Successful
I0813 19:56:43.171] message:Error from server (NotFound): secrets "mysecret" not found
I0813 19:56:43.171] has:secrets "mysecret" not found
I0813 19:56:43.336] Successful
I0813 19:56:43.337] message:Error from server (NotFound): secrets "mysecret" not found
I0813 19:56:43.337] has:secrets "mysecret" not found
I0813 19:56:43.338] Successful
I0813 19:56:43.338] message:user-specified
I0813 19:56:43.338] has:user-specified
I0813 19:56:43.414] Successful
I0813 19:56:43.493] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"86f023b4-14e3-49a9-bdae-3dafd476784b","resourceVersion":"762","creationTimestamp":"2019-08-13T19:56:43Z"}}
... skipping 2 lines ...
I0813 19:56:43.669] has:uid
I0813 19:56:43.744] Successful
I0813 19:56:43.745] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"86f023b4-14e3-49a9-bdae-3dafd476784b","resourceVersion":"763","creationTimestamp":"2019-08-13T19:56:43Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-13T19:56:43Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0813 19:56:43.746] has:config1
I0813 19:56:43.822] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"86f023b4-14e3-49a9-bdae-3dafd476784b"}}
I0813 19:56:43.924] Successful
I0813 19:56:43.924] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0813 19:56:43.924] has:configmaps "tester-update-cm" not found
I0813 19:56:43.939] +++ exit code: 0
I0813 19:56:43.974] Recording: run_kubectl_create_kustomization_directory_tests
I0813 19:56:43.975] Running command: run_kubectl_create_kustomization_directory_tests
I0813 19:56:43.994] 
I0813 19:56:43.996] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0813 19:56:46.794] valid-pod   0/1     Pending   0          0s
I0813 19:56:46.794] has:valid-pod
I0813 19:56:47.887] Successful
I0813 19:56:47.888] message:NAME        READY   STATUS    RESTARTS   AGE
I0813 19:56:47.888] valid-pod   0/1     Pending   0          0s
I0813 19:56:47.888] STATUS      REASON          MESSAGE
I0813 19:56:47.889] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 19:56:47.889] has:Timeout exceeded while reading body
I0813 19:56:47.984] Successful
I0813 19:56:47.984] message:NAME        READY   STATUS    RESTARTS   AGE
I0813 19:56:47.984] valid-pod   0/1     Pending   0          1s
I0813 19:56:47.985] has:valid-pod
I0813 19:56:48.062] Successful
I0813 19:56:48.062] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0813 19:56:48.062] has:Invalid timeout value
I0813 19:56:48.158] pod "valid-pod" deleted
I0813 19:56:48.179] +++ exit code: 0
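The timeout case above exercises kubectl's request-timeout parsing. A sketch, assuming the global --request-timeout flag is what the script drives (a valid duration is accepted; an unparsable one is rejected before any request is sent):

$ kubectl get pod valid-pod --request-timeout=1s        # integer plus a recognised time unit
$ kubectl get pod valid-pod --request-timeout=invalid
error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)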
I0813 19:56:48.222] Recording: run_crd_tests
I0813 19:56:48.223] Running command: run_crd_tests
I0813 19:56:48.247] 
... skipping 245 lines ...
I0813 19:56:53.103] foo.company.com/test patched
I0813 19:56:53.204] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0813 19:56:53.297] (Bfoo.company.com/test patched
I0813 19:56:53.397] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0813 19:56:53.487] (Bfoo.company.com/test patched
I0813 19:56:53.584] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0813 19:56:53.748] (B+++ [0813 19:56:53] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0813 19:56:53.818] {
I0813 19:56:53.819]     "apiVersion": "company.com/v1",
I0813 19:56:53.819]     "kind": "Foo",
I0813 19:56:53.819]     "metadata": {
I0813 19:56:53.819]         "annotations": {
I0813 19:56:53.819]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 345 lines ...
I0813 19:57:16.826] (Bnamespace/non-native-resources created
I0813 19:57:16.998] bar.company.com/test created
I0813 19:57:17.100] crd.sh:455: Successful get bars {{len .items}}: 1
I0813 19:57:17.183] (Bnamespace "non-native-resources" deleted
I0813 19:57:22.415] crd.sh:458: Successful get bars {{len .items}}: 0
I0813 19:57:22.592] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0813 19:57:22.693] Error from server (NotFound): namespaces "non-native-resources" not found
I0813 19:57:22.793] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0813 19:57:22.845] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0813 19:57:22.956] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0813 19:57:22.991] +++ exit code: 0
I0813 19:57:23.031] Recording: run_cmd_with_img_tests
I0813 19:57:23.031] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0813 19:57:23.373] I0813 19:57:23.372418   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726243-25114", Name:"test1-9797f89d8", UID:"48667f63-40cb-4443-ae2d-b4d17bc3c92d", APIVersion:"apps/v1", ResourceVersion:"918", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-sxmzm
I0813 19:57:23.474] Successful
I0813 19:57:23.475] message:deployment.apps/test1 created
I0813 19:57:23.476] has:deployment.apps/test1 created
I0813 19:57:23.476] deployment.apps "test1" deleted
I0813 19:57:23.556] Successful
I0813 19:57:23.557] message:error: Invalid image name "InvalidImageName": invalid reference format
I0813 19:57:23.557] has:error: Invalid image name "InvalidImageName": invalid reference format
I0813 19:57:23.603] +++ exit code: 0
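The image-name case shows kubectl validating the image reference client-side before creating anything; in the run recorded above the valid reference produced deployment.apps/test1. A sketch (the valid image below is a hypothetical stand-in):

$ kubectl run test1 --image=k8s.gcr.io/pause:3.1        # lower-case reference parses
$ kubectl run test1 --image=InvalidImageName            # upper-case characters make the reference unparsable
error: Invalid image name "InvalidImageName": invalid reference format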
I0813 19:57:23.644] +++ [0813 19:57:23] Testing recursive resources
I0813 19:57:23.648] +++ [0813 19:57:23] Creating namespace namespace-1565726243-5778
I0813 19:57:23.733] namespace/namespace-1565726243-5778 created
I0813 19:57:23.811] Context "test" modified.
I0813 19:57:23.911] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:24.238] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:24.241] (BSuccessful
I0813 19:57:24.241] message:pod/busybox0 created
I0813 19:57:24.242] pod/busybox1 created
I0813 19:57:24.242] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0813 19:57:24.242] has:error validating data: kind not set
I0813 19:57:24.339] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:24.532] (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0813 19:57:24.535] (BSuccessful
I0813 19:57:24.536] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:24.536] has:Object 'Kind' is missing
I0813 19:57:24.635] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:24.913] (Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0813 19:57:24.915] (BSuccessful
I0813 19:57:24.915] message:pod/busybox0 replaced
I0813 19:57:24.915] pod/busybox1 replaced
I0813 19:57:24.915] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0813 19:57:24.916] has:error validating data: kind not set
W0813 19:57:25.016] W0813 19:57:23.605478   49606 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0813 19:57:25.016] E0813 19:57:23.607811   53082 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:25.017] W0813 19:57:23.721462   49606 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0813 19:57:25.017] E0813 19:57:23.723300   53082 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:25.017] W0813 19:57:23.857030   49606 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0813 19:57:25.017] E0813 19:57:23.859173   53082 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:25.017] W0813 19:57:23.968283   49606 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0813 19:57:25.018] E0813 19:57:23.970242   53082 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:25.018] E0813 19:57:24.609518   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:25.018] E0813 19:57:24.724744   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:25.018] E0813 19:57:24.860908   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:25.019] E0813 19:57:24.971742   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:25.119] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:25.127] (BSuccessful
I0813 19:57:25.128] message:Name:         busybox0
I0813 19:57:25.128] Namespace:    namespace-1565726243-5778
I0813 19:57:25.128] Priority:     0
I0813 19:57:25.128] Node:         <none>
... skipping 159 lines ...
I0813 19:57:25.143] has:Object 'Kind' is missing
I0813 19:57:25.232] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:25.428] (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0813 19:57:25.430] (BSuccessful
I0813 19:57:25.431] message:pod/busybox0 annotated
I0813 19:57:25.431] pod/busybox1 annotated
I0813 19:57:25.431] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:25.431] has:Object 'Kind' is missing
I0813 19:57:25.523] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:25.811] (Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0813 19:57:25.814] (BSuccessful
I0813 19:57:25.814] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0813 19:57:25.814] pod/busybox0 configured
I0813 19:57:25.814] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0813 19:57:25.814] pod/busybox1 configured
I0813 19:57:25.815] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0813 19:57:25.815] has:error validating data: kind not set
I0813 19:57:25.906] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:26.083] (Bdeployment.apps/nginx created
W0813 19:57:26.184] E0813 19:57:25.611052   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:26.184] E0813 19:57:25.726389   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:26.185] E0813 19:57:25.862726   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:26.185] E0813 19:57:25.973409   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:26.185] I0813 19:57:26.088731   53082 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565726243-5778", Name:"nginx", UID:"0ce82716-f08a-495e-a01d-556b9b208945", APIVersion:"apps/v1", ResourceVersion:"943", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0813 19:57:26.186] I0813 19:57:26.094653   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726243-5778", Name:"nginx-bbbbb95b5", UID:"1d35aa45-8fce-4db6-87bb-a73df52e299f", APIVersion:"apps/v1", ResourceVersion:"944", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-59877
W0813 19:57:26.186] I0813 19:57:26.099180   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726243-5778", Name:"nginx-bbbbb95b5", UID:"1d35aa45-8fce-4db6-87bb-a73df52e299f", APIVersion:"apps/v1", ResourceVersion:"944", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-vmlhl
W0813 19:57:26.187] I0813 19:57:26.100984   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726243-5778", Name:"nginx-bbbbb95b5", UID:"1d35aa45-8fce-4db6-87bb-a73df52e299f", APIVersion:"apps/v1", ResourceVersion:"944", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-wb9v9
I0813 19:57:26.287] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0813 19:57:26.291] (Bgeneric-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 41 lines ...
I0813 19:57:26.472]       terminationGracePeriodSeconds: 30
I0813 19:57:26.472] status: {}
I0813 19:57:26.472] has:extensions/v1beta1
I0813 19:57:26.549] deployment.apps "nginx" deleted
W0813 19:57:26.650] kubectl convert is DEPRECATED and will be removed in a future version.
W0813 19:57:26.650] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0813 19:57:26.651] E0813 19:57:26.613172   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:26.729] E0813 19:57:26.728487   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:26.830] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:26.839] (Bgeneric-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:26.841] (BSuccessful
I0813 19:57:26.842] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0813 19:57:26.842] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0813 19:57:26.843] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:26.843] has:Object 'Kind' is missing
I0813 19:57:26.939] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:27.029] (BSuccessful
I0813 19:57:27.030] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:27.030] has:busybox0:busybox1:
I0813 19:57:27.032] Successful
I0813 19:57:27.032] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:27.032] has:Object 'Kind' is missing
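The "Object 'Kind' is missing" failures above come from a manifest whose kind field is misspelled as "ind" (visible in the quoted JSON). A minimal sketch that should reproduce the same decode error, assuming a reachable cluster and an illustrative /tmp path rather than the test's own hack/testdata tree:

# Write a pod manifest with 'kind' misspelled as 'ind', mirroring the JSON in the error above.
mkdir -p /tmp/recursive/pod
cat <<'EOF' > /tmp/recursive/pod/busybox-broken.yaml
apiVersion: v1
ind: Pod              # intentionally broken: the field should be 'kind'
metadata:
  name: busybox2
  labels:
    app: busybox2
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sleep", "3600"]
  restartPolicy: Always
EOF
# Recursive processing decodes every file under the directory; the broken one
# fails with "Object 'Kind' is missing" while valid manifests are still handled.
kubectl apply -f /tmp/recursive/pod --recursive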
I0813 19:57:27.129] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:27.224] (Bpod/busybox0 labeled
I0813 19:57:27.225] pod/busybox1 labeled
I0813 19:57:27.225] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:27.319] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0813 19:57:27.321] (BSuccessful
I0813 19:57:27.321] message:pod/busybox0 labeled
I0813 19:57:27.321] pod/busybox1 labeled
I0813 19:57:27.322] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:27.322] has:Object 'Kind' is missing
I0813 19:57:27.420] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:27.512] (Bpod/busybox0 patched
I0813 19:57:27.512] pod/busybox1 patched
I0813 19:57:27.513] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:27.608] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0813 19:57:27.611] (BSuccessful
I0813 19:57:27.611] message:pod/busybox0 patched
I0813 19:57:27.611] pod/busybox1 patched
I0813 19:57:27.612] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:27.612] has:Object 'Kind' is missing
I0813 19:57:27.709] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:27.896] (Bgeneric-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:27.898] (BSuccessful
I0813 19:57:27.899] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0813 19:57:27.899] pod "busybox0" force deleted
I0813 19:57:27.899] pod "busybox1" force deleted
I0813 19:57:27.900] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 19:57:27.900] has:Object 'Kind' is missing
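The "Immediate deletion does not wait..." warning above is what kubectl prints for a force delete with a zero grace period. A hedged sketch of the shape of that command (the pod names are illustrative, taken from this test run):

# Force-delete pods without waiting for graceful termination; kubectl warns that
# the resources may keep running on the cluster indefinitely.
kubectl delete pod busybox0 busybox1 --force --grace-period=0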
I0813 19:57:27.995] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:28.168] (Breplicationcontroller/busybox0 created
I0813 19:57:28.172] replicationcontroller/busybox1 created
W0813 19:57:28.274] E0813 19:57:26.864352   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:28.274] E0813 19:57:26.975399   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:28.275] E0813 19:57:27.614739   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:28.275] E0813 19:57:27.730018   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:28.275] E0813 19:57:27.865945   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:28.275] E0813 19:57:27.976530   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:28.276] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0813 19:57:28.276] I0813 19:57:28.172768   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565726243-5778", Name:"busybox0", UID:"26047dd3-f645-4656-b927-274490ac6fce", APIVersion:"v1", ResourceVersion:"974", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-9vmf7
W0813 19:57:28.276] I0813 19:57:28.177020   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565726243-5778", Name:"busybox1", UID:"f8f73c14-949c-4b1c-80ff-605e981ae1ed", APIVersion:"v1", ResourceVersion:"976", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-hhkrd
I0813 19:57:28.377] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:28.381] (Bgeneric-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:28.476] (Bgeneric-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0813 19:57:28.569] (Bgeneric-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0813 19:57:28.769] (Bgeneric-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0813 19:57:28.868] (Bgeneric-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0813 19:57:28.870] (BSuccessful
I0813 19:57:28.870] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0813 19:57:28.870] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0813 19:57:28.871] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:28.871] has:Object 'Kind' is missing
I0813 19:57:28.954] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0813 19:57:29.049] horizontalpodautoscaler.autoscaling "busybox1" deleted
W0813 19:57:29.150] E0813 19:57:28.616383   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:29.151] E0813 19:57:28.731691   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:29.151] E0813 19:57:28.867723   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:29.151] E0813 19:57:28.978360   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:29.252] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:29.252] (Bgeneric-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0813 19:57:29.345] (Bgeneric-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0813 19:57:29.543] (Bgeneric-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0813 19:57:29.636] (Bgeneric-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0813 19:57:29.638] (BSuccessful
I0813 19:57:29.639] message:service/busybox0 exposed
I0813 19:57:29.639] service/busybox1 exposed
I0813 19:57:29.640] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:29.640] has:Object 'Kind' is missing
I0813 19:57:29.736] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:29.832] (Bgeneric-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0813 19:57:29.927] (Bgeneric-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0813 19:57:30.149] (Bgeneric-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0813 19:57:30.244] (Bgeneric-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0813 19:57:30.246] (BSuccessful
I0813 19:57:30.247] message:replicationcontroller/busybox0 scaled
I0813 19:57:30.247] replicationcontroller/busybox1 scaled
I0813 19:57:30.248] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:30.248] has:Object 'Kind' is missing
I0813 19:57:30.343] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:30.533] (Bgeneric-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:30.535] (BSuccessful
I0813 19:57:30.536] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0813 19:57:30.536] replicationcontroller "busybox0" force deleted
I0813 19:57:30.537] replicationcontroller "busybox1" force deleted
I0813 19:57:30.537] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:30.538] has:Object 'Kind' is missing
I0813 19:57:30.631] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:30.806] (Bdeployment.apps/nginx1-deployment created
I0813 19:57:30.812] deployment.apps/nginx0-deployment created
W0813 19:57:30.913] E0813 19:57:29.618179   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:30.913] E0813 19:57:29.733340   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:30.914] E0813 19:57:29.869361   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:30.914] E0813 19:57:29.980066   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:30.914] I0813 19:57:30.040023   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565726243-5778", Name:"busybox0", UID:"26047dd3-f645-4656-b927-274490ac6fce", APIVersion:"v1", ResourceVersion:"996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-z8rpm
W0813 19:57:30.915] I0813 19:57:30.049860   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565726243-5778", Name:"busybox1", UID:"f8f73c14-949c-4b1c-80ff-605e981ae1ed", APIVersion:"v1", ResourceVersion:"999", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-p4kw8
W0813 19:57:30.915] E0813 19:57:30.619534   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:30.915] E0813 19:57:30.735255   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:30.915] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0813 19:57:30.916] I0813 19:57:30.811962   53082 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565726243-5778", Name:"nginx1-deployment", UID:"470bbcfd-2277-4146-a506-10bcd3da2658", APIVersion:"apps/v1", ResourceVersion:"1016", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0813 19:57:30.916] I0813 19:57:30.816959   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726243-5778", Name:"nginx1-deployment-84f7f49fb7", UID:"ba07f1d8-d664-40d1-adb4-d6a4a2909693", APIVersion:"apps/v1", ResourceVersion:"1018", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-9hxkd
W0813 19:57:30.917] I0813 19:57:30.824616   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726243-5778", Name:"nginx1-deployment-84f7f49fb7", UID:"ba07f1d8-d664-40d1-adb4-d6a4a2909693", APIVersion:"apps/v1", ResourceVersion:"1018", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-ks8jv
W0813 19:57:30.917] I0813 19:57:30.827639   53082 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565726243-5778", Name:"nginx0-deployment", UID:"785c1446-c5e4-4668-877f-7c1f3338d6f8", APIVersion:"apps/v1", ResourceVersion:"1017", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0813 19:57:30.918] I0813 19:57:30.837061   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726243-5778", Name:"nginx0-deployment-57475bf54d", UID:"14b783d9-9c56-46bc-abbd-179a287f1316", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-dh6xs
W0813 19:57:30.918] I0813 19:57:30.844149   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565726243-5778", Name:"nginx0-deployment-57475bf54d", UID:"14b783d9-9c56-46bc-abbd-179a287f1316", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-xsszj
W0813 19:57:30.918] E0813 19:57:30.870521   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:30.982] E0813 19:57:30.981780   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:31.083] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0813 19:57:31.084] (Bgeneric-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0813 19:57:31.287] (Bgeneric-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0813 19:57:31.291] (BSuccessful
I0813 19:57:31.291] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0813 19:57:31.292] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0813 19:57:31.292] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 19:57:31.292] has:Object 'Kind' is missing
I0813 19:57:31.408] deployment.apps/nginx1-deployment paused
I0813 19:57:31.415] deployment.apps/nginx0-deployment paused
I0813 19:57:31.531] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0813 19:57:31.535] (BSuccessful
I0813 19:57:31.536] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 19:57:31.536] has:Object 'Kind' is missing
I0813 19:57:31.636] deployment.apps/nginx1-deployment resumed
I0813 19:57:31.643] deployment.apps/nginx0-deployment resumed
W0813 19:57:31.743] E0813 19:57:31.620807   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:31.744] E0813 19:57:31.736649   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:31.844] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0813 19:57:31.845] (BSuccessful
I0813 19:57:31.846] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 19:57:31.846] has:Object 'Kind' is missing
I0813 19:57:31.880] Successful
I0813 19:57:31.881] message:deployment.apps/nginx1-deployment 
I0813 19:57:31.881] REVISION  CHANGE-CAUSE
I0813 19:57:31.881] 1         <none>
I0813 19:57:31.881] 
I0813 19:57:31.881] deployment.apps/nginx0-deployment 
I0813 19:57:31.882] REVISION  CHANGE-CAUSE
I0813 19:57:31.882] 1         <none>
I0813 19:57:31.882] 
I0813 19:57:31.882] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 19:57:31.883] has:nginx0-deployment
I0813 19:57:31.883] Successful
I0813 19:57:31.884] message:deployment.apps/nginx1-deployment 
I0813 19:57:31.884] REVISION  CHANGE-CAUSE
I0813 19:57:31.884] 1         <none>
I0813 19:57:31.884] 
I0813 19:57:31.884] deployment.apps/nginx0-deployment 
I0813 19:57:31.884] REVISION  CHANGE-CAUSE
I0813 19:57:31.884] 1         <none>
I0813 19:57:31.885] 
I0813 19:57:31.885] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 19:57:31.885] has:nginx1-deployment
I0813 19:57:31.885] Successful
I0813 19:57:31.885] message:deployment.apps/nginx1-deployment 
I0813 19:57:31.886] REVISION  CHANGE-CAUSE
I0813 19:57:31.886] 1         <none>
I0813 19:57:31.886] 
I0813 19:57:31.886] deployment.apps/nginx0-deployment 
I0813 19:57:31.886] REVISION  CHANGE-CAUSE
I0813 19:57:31.886] 1         <none>
I0813 19:57:31.886] 
I0813 19:57:31.887] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 19:57:31.887] has:Object 'Kind' is missing
I0813 19:57:31.971] deployment.apps "nginx1-deployment" force deleted
I0813 19:57:31.976] deployment.apps "nginx0-deployment" force deleted
W0813 19:57:32.077] E0813 19:57:31.871764   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:32.078] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0813 19:57:32.078] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0813 19:57:32.079] E0813 19:57:31.983235   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:32.623] E0813 19:57:32.622631   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:32.738] E0813 19:57:32.738084   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:32.874] E0813 19:57:32.873506   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:32.985] E0813 19:57:32.984937   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:33.086] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:33.259] (Breplicationcontroller/busybox0 created
I0813 19:57:33.266] replicationcontroller/busybox1 created
W0813 19:57:33.367] I0813 19:57:33.263945   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565726243-5778", Name:"busybox0", UID:"26243627-5abf-48ef-87c6-4537d38075e3", APIVersion:"v1", ResourceVersion:"1065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qpltl
W0813 19:57:33.367] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0813 19:57:33.367] I0813 19:57:33.270622   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565726243-5778", Name:"busybox1", UID:"61ccd027-f404-4116-b367-703057687e1c", APIVersion:"v1", ResourceVersion:"1067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-5wkhc
I0813 19:57:33.468] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 19:57:33.471] (BSuccessful
I0813 19:57:33.471] message:no rollbacker has been implemented for "ReplicationController"
I0813 19:57:33.471] no rollbacker has been implemented for "ReplicationController"
I0813 19:57:33.472] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0813 19:57:33.473] message:no rollbacker has been implemented for "ReplicationController"
I0813 19:57:33.474] no rollbacker has been implemented for "ReplicationController"
I0813 19:57:33.474] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:33.474] has:Object 'Kind' is missing
I0813 19:57:33.572] Successful
I0813 19:57:33.573] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:33.574] error: replicationcontrollers "busybox0" pausing is not supported
I0813 19:57:33.574] error: replicationcontrollers "busybox1" pausing is not supported
I0813 19:57:33.574] has:Object 'Kind' is missing
I0813 19:57:33.576] Successful
I0813 19:57:33.576] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:33.577] error: replicationcontrollers "busybox0" pausing is not supported
I0813 19:57:33.577] error: replicationcontrollers "busybox1" pausing is not supported
I0813 19:57:33.577] has:replicationcontrollers "busybox0" pausing is not supported
I0813 19:57:33.578] Successful
I0813 19:57:33.579] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:33.579] error: replicationcontrollers "busybox0" pausing is not supported
I0813 19:57:33.579] error: replicationcontrollers "busybox1" pausing is not supported
I0813 19:57:33.579] has:replicationcontrollers "busybox1" pausing is not supported
I0813 19:57:33.677] Successful
I0813 19:57:33.678] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:33.678] error: replicationcontrollers "busybox0" resuming is not supported
I0813 19:57:33.678] error: replicationcontrollers "busybox1" resuming is not supported
I0813 19:57:33.678] has:Object 'Kind' is missing
I0813 19:57:33.679] Successful
I0813 19:57:33.680] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:33.680] error: replicationcontrollers "busybox0" resuming is not supported
I0813 19:57:33.680] error: replicationcontrollers "busybox1" resuming is not supported
I0813 19:57:33.680] has:replicationcontrollers "busybox0" resuming is not supported
I0813 19:57:33.681] Successful
I0813 19:57:33.682] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 19:57:33.682] error: replicationcontrollers "busybox0" resuming is not supported
I0813 19:57:33.682] error: replicationcontrollers "busybox1" resuming is not supported
I0813 19:57:33.682] has:replicationcontrollers "busybox0" resuming is not supported
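The errors above show that rollout pause and resume are only implemented for kinds that support them; replication controllers report "pausing/resuming is not supported". A hedged sketch of both outcomes, with illustrative object names (my-deploy is hypothetical, busybox0 is the rc from this test):

# Pausing a deployment succeeds and is reflected in .spec.paused.
kubectl rollout pause deployment/my-deploy
kubectl rollout resume deployment/my-deploy
# The same verb against a replication controller should report that pausing is not supported.
kubectl rollout pause rc/busybox0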
I0813 19:57:33.773] replicationcontroller "busybox0" force deleted
I0813 19:57:33.779] replicationcontroller "busybox1" force deleted
W0813 19:57:33.880] E0813 19:57:33.624752   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:33.881] E0813 19:57:33.739838   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:33.881] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0813 19:57:33.881] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0813 19:57:33.881] E0813 19:57:33.875411   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:33.987] E0813 19:57:33.986687   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:34.627] E0813 19:57:34.626477   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:34.742] E0813 19:57:34.741655   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:34.842] Recording: run_namespace_tests
I0813 19:57:34.843] Running command: run_namespace_tests
I0813 19:57:34.843] 
I0813 19:57:34.843] +++ Running case: test-cmd.run_namespace_tests 
I0813 19:57:34.843] +++ working dir: /go/src/k8s.io/kubernetes
I0813 19:57:34.843] +++ command: run_namespace_tests
I0813 19:57:34.843] +++ [0813 19:57:34] Testing kubectl(v1:namespaces)
I0813 19:57:34.915] namespace/my-namespace created
I0813 19:57:35.016] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0813 19:57:35.099] (Bnamespace "my-namespace" deleted
W0813 19:57:35.200] E0813 19:57:34.877071   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:35.201] E0813 19:57:34.988265   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:35.628] E0813 19:57:35.628133   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:35.744] E0813 19:57:35.743427   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:35.879] E0813 19:57:35.878910   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:35.990] E0813 19:57:35.990014   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:36.630] E0813 19:57:36.629891   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:36.745] E0813 19:57:36.745057   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:36.881] E0813 19:57:36.880464   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:36.992] E0813 19:57:36.991971   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:37.632] E0813 19:57:37.632021   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:37.747] E0813 19:57:37.746928   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:37.883] E0813 19:57:37.882215   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:37.994] E0813 19:57:37.993538   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:38.634] E0813 19:57:38.633773   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:38.749] E0813 19:57:38.748585   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:38.884] E0813 19:57:38.884155   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:38.995] E0813 19:57:38.995086   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:39.636] E0813 19:57:39.635520   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:39.751] E0813 19:57:39.750418   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:39.886] E0813 19:57:39.885782   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:39.997] E0813 19:57:39.996856   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:40.224] namespace/my-namespace condition met
I0813 19:57:40.323] Successful
I0813 19:57:40.324] message:Error from server (NotFound): namespaces "my-namespace" not found
I0813 19:57:40.324] has: not found
I0813 19:57:40.403] namespace/my-namespace created
I0813 19:57:40.507] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0813 19:57:40.764] (BSuccessful
I0813 19:57:40.765] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0813 19:57:40.765] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0813 19:57:40.770] namespace "namespace-1565726205-8798" deleted
I0813 19:57:40.770] namespace "namespace-1565726206-2572" deleted
I0813 19:57:40.770] namespace "namespace-1565726208-12560" deleted
I0813 19:57:40.770] namespace "namespace-1565726209-12547" deleted
I0813 19:57:40.771] namespace "namespace-1565726243-25114" deleted
I0813 19:57:40.771] namespace "namespace-1565726243-5778" deleted
I0813 19:57:40.771] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0813 19:57:40.771] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0813 19:57:40.772] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0813 19:57:40.772] has:warning: deleting cluster-scoped resources
I0813 19:57:40.772] Successful
I0813 19:57:40.772] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0813 19:57:40.772] namespace "kube-node-lease" deleted
I0813 19:57:40.773] namespace "my-namespace" deleted
I0813 19:57:40.773] namespace "namespace-1565726109-3434" deleted
... skipping 27 lines ...
I0813 19:57:40.778] namespace "namespace-1565726205-8798" deleted
I0813 19:57:40.778] namespace "namespace-1565726206-2572" deleted
I0813 19:57:40.778] namespace "namespace-1565726208-12560" deleted
I0813 19:57:40.779] namespace "namespace-1565726209-12547" deleted
I0813 19:57:40.779] namespace "namespace-1565726243-25114" deleted
I0813 19:57:40.779] namespace "namespace-1565726243-5778" deleted
I0813 19:57:40.779] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0813 19:57:40.779] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0813 19:57:40.780] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0813 19:57:40.780] has:namespace "my-namespace" deleted
W0813 19:57:40.881] E0813 19:57:40.636914   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:40.881] I0813 19:57:40.677632   53082 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0813 19:57:40.882] E0813 19:57:40.752087   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:40.882] I0813 19:57:40.757380   53082 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0813 19:57:40.882] I0813 19:57:40.777983   53082 controller_utils.go:1036] Caches are synced for garbage collector controller
W0813 19:57:40.882] I0813 19:57:40.858097   53082 controller_utils.go:1036] Caches are synced for resource quota controller
W0813 19:57:40.888] E0813 19:57:40.887359   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:40.988] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0813 19:57:40.989] (Bnamespace/other created
I0813 19:57:41.081] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0813 19:57:41.186] (Bcore.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:41.367] (Bpod/valid-pod created
W0813 19:57:41.468] E0813 19:57:40.998696   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:41.568] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 19:57:41.577] (Bcore.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 19:57:41.664] (BSuccessful
I0813 19:57:41.665] message:error: a resource cannot be retrieved by name across all namespaces
I0813 19:57:41.665] has:a resource cannot be retrieved by name across all namespaces
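The assertion above exercises kubectl's rule that a single resource cannot be fetched by name across all namespaces, since a name is only unique within one namespace. A hedged sketch of the failing and working forms, using the valid-pod object from this test:

# Fails: a specific name combined with --all-namespaces is rejected.
kubectl get pod valid-pod --all-namespaces
# Works: the same name scoped to one namespace.
kubectl get pod valid-pod --namespace=other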
W0813 19:57:41.766] E0813 19:57:41.639019   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:41.767] E0813 19:57:41.753808   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:41.854] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0813 19:57:41.890] E0813 19:57:41.889877   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:41.991] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 19:57:41.992] (Bpod "valid-pod" force deleted
I0813 19:57:41.992] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:42.056] (Bnamespace "other" deleted
W0813 19:57:42.157] E0813 19:57:41.999992   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:42.641] E0813 19:57:42.641136   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:42.757] E0813 19:57:42.756490   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:42.892] E0813 19:57:42.891776   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:43.002] E0813 19:57:43.001789   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:43.643] E0813 19:57:43.642941   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:43.654] I0813 19:57:43.654149   53082 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565726243-5778
W0813 19:57:43.661] I0813 19:57:43.660448   53082 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565726243-5778
W0813 19:57:43.759] E0813 19:57:43.758276   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:43.894] E0813 19:57:43.893437   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:44.004] E0813 19:57:44.003740   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:44.645] E0813 19:57:44.644927   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:44.760] E0813 19:57:44.759930   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:44.896] E0813 19:57:44.895248   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:45.006] E0813 19:57:45.005745   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:45.648] E0813 19:57:45.647901   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:45.762] E0813 19:57:45.761403   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:45.901] E0813 19:57:45.900509   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:46.010] E0813 19:57:46.009291   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:46.652] E0813 19:57:46.651431   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:46.764] E0813 19:57:46.763542   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:46.903] E0813 19:57:46.903101   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:47.011] E0813 19:57:47.011037   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:47.194] +++ exit code: 0
I0813 19:57:47.236] Recording: run_secrets_test
I0813 19:57:47.237] Running command: run_secrets_test
I0813 19:57:47.262] 
I0813 19:57:47.264] +++ Running case: test-cmd.run_secrets_test 
I0813 19:57:47.267] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 44 lines ...
I0813 19:57:47.871] (Bcore.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:47.947] (Bsecret/test-secret created
I0813 19:57:48.045] core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0813 19:57:48.138] (Bcore.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
I0813 19:57:48.300] (Bsecret "test-secret" deleted
W0813 19:57:48.401] I0813 19:57:47.518832   70066 loader.go:375] Config loaded from file:  /tmp/tmp.7n2JYRfix6/.kube/config
W0813 19:57:48.403] E0813 19:57:47.652834   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:48.404] E0813 19:57:47.764921   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:48.405] E0813 19:57:47.904582   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:48.406] E0813 19:57:48.012525   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:48.507] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:48.602] (Bsecret/test-secret created
W0813 19:57:48.705] E0813 19:57:48.655228   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:48.767] E0813 19:57:48.766756   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:48.869] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0813 19:57:48.976] (Bcore.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I0813 19:57:49.308] (Bsecret "test-secret" deleted
W0813 19:57:49.409] E0813 19:57:48.907794   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:49.410] E0813 19:57:49.014834   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:49.511] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:49.641] (Bsecret/test-secret created
W0813 19:57:49.742] E0813 19:57:49.657988   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:49.771] E0813 19:57:49.769834   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:49.872] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0813 19:57:50.039] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0813 19:57:50.219] secret "test-secret" deleted
W0813 19:57:50.320] E0813 19:57:49.910467   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:50.322] E0813 19:57:50.016894   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:50.423] secret/test-secret created
I0813 19:57:50.594] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0813 19:57:50.772] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0813 19:57:50.850] secret "test-secret" deleted
W0813 19:57:50.951] E0813 19:57:50.661129   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:50.952] E0813 19:57:50.771276   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:50.952] E0813 19:57:50.911958   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:51.019] E0813 19:57:51.018494   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:51.119] secret/secret-string-data created
I0813 19:57:51.120] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0813 19:57:51.174] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0813 19:57:51.263] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0813 19:57:51.338] secret "secret-string-data" deleted
I0813 19:57:51.429] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:57:51.591] secret "test-secret" deleted
I0813 19:57:51.670] namespace "test-secrets" deleted
W0813 19:57:51.771] E0813 19:57:51.662977   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:51.773] E0813 19:57:51.772903   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:51.914] E0813 19:57:51.914063   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:52.021] E0813 19:57:52.020317   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:52.665] E0813 19:57:52.664891   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:52.775] E0813 19:57:52.774476   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:52.916] E0813 19:57:52.915575   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:53.022] E0813 19:57:53.021685   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:53.667] E0813 19:57:53.666377   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:53.776] E0813 19:57:53.775984   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:53.917] E0813 19:57:53.917056   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:54.024] E0813 19:57:54.023383   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:54.668] E0813 19:57:54.667781   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:54.778] E0813 19:57:54.777466   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:54.919] E0813 19:57:54.918644   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:55.025] E0813 19:57:55.024730   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:55.669] E0813 19:57:55.669241   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:55.779] E0813 19:57:55.779161   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:55.920] E0813 19:57:55.920216   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:56.027] E0813 19:57:56.026387   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:56.671] E0813 19:57:56.670549   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:56.771] +++ exit code: 0
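For reference, the secret-type assertions above (core.sh:737, 753, 767, 774) exercise the three kubectl secret generators; a minimal sketch, with illustrative literal values and file names rather than the fixtures core.sh actually uses:
  kubectl create namespace test-secrets
  # generic secret with an explicit type -> {{.type}} prints "test-type"
  kubectl create secret generic test-secret --namespace=test-secrets --type=test-type --from-literal=key1=value1
  # docker-registry secret -> {{.type}} prints "kubernetes.io/dockerconfigjson"
  kubectl create secret docker-registry test-secret --namespace=test-secrets --docker-username=user --docker-password=pass --docker-email=user@example.com
  # TLS secret -> {{.type}} prints "kubernetes.io/tls"
  kubectl create secret tls test-secret --namespace=test-secrets --cert=tls.crt --key=tls.key
  # each check reads the object back with a go-template, e.g.:
  kubectl get secret test-secret --namespace=test-secrets -o go-template='{{.type}}'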
I0813 19:57:56.802] Recording: run_configmap_tests
I0813 19:57:56.802] Running command: run_configmap_tests
I0813 19:57:56.820] 
I0813 19:57:56.822] +++ Running case: test-cmd.run_configmap_tests 
I0813 19:57:56.825] +++ working dir: /go/src/k8s.io/kubernetes
I0813 19:57:56.826] +++ command: run_configmap_tests
I0813 19:57:56.838] +++ [0813 19:57:56] Creating namespace namespace-1565726276-3431
I0813 19:57:56.910] namespace/namespace-1565726276-3431 created
I0813 19:57:56.977] Context "test" modified.
I0813 19:57:56.983] +++ [0813 19:57:56] Testing configmaps
W0813 19:57:57.084] E0813 19:57:56.780844   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:57.084] E0813 19:57:56.921385   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:57.084] E0813 19:57:57.027635   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:57:57.185] configmap/test-configmap created
I0813 19:57:57.260] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0813 19:57:57.336] configmap "test-configmap" deleted
I0813 19:57:57.432] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0813 19:57:57.503] namespace/test-configmaps created
I0813 19:57:57.590] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0813 19:57:57.913] configmap/test-binary-configmap created
I0813 19:57:58.003] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0813 19:57:58.095] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0813 19:57:58.335] configmap "test-configmap" deleted
I0813 19:57:58.418] configmap "test-binary-configmap" deleted
I0813 19:57:58.493] namespace "test-configmaps" deleted
W0813 19:57:58.594] E0813 19:57:57.671711   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:58.595] E0813 19:57:57.782136   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:58.595] E0813 19:57:57.922720   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:58.595] E0813 19:57:58.029462   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:58.673] E0813 19:57:58.673255   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:58.784] E0813 19:57:58.783761   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:58.925] E0813 19:57:58.924428   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:59.031] E0813 19:57:59.030912   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:59.675] E0813 19:57:59.675169   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:59.786] E0813 19:57:59.785461   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:57:59.926] E0813 19:57:59.926053   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:00.033] E0813 19:58:00.032571   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:00.677] E0813 19:58:00.677175   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:00.787] E0813 19:58:00.787195   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:00.928] E0813 19:58:00.927782   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:01.035] E0813 19:58:01.034192   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:01.679] E0813 19:58:01.679062   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:01.789] E0813 19:58:01.788882   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:01.930] E0813 19:58:01.929516   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:02.036] E0813 19:58:02.036000   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:02.681] E0813 19:58:02.680836   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:02.791] E0813 19:58:02.790579   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:02.932] E0813 19:58:02.931270   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:03.038] E0813 19:58:03.037711   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:03.634] +++ exit code: 0
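The configmap assertions above (core.sh:28-49) map to the stock configmap generators; the data keys and file name below are illustrative, since the log does not show the payloads core.sh loads:
  kubectl create configmap test-configmap --from-literal=key1=value1
  kubectl get configmap/test-configmap -o go-template='{{.metadata.name}}'
  kubectl create namespace test-configmaps
  kubectl create configmap test-configmap --namespace=test-configmaps --from-literal=key1=value1
  # binary payloads come in via --from-file; the file name here is hypothetical
  kubectl create configmap test-binary-configmap --namespace=test-configmaps --from-file=binary.bin
  kubectl delete namespace test-configmaps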
I0813 19:58:03.675] Recording: run_client_config_tests
I0813 19:58:03.676] Running command: run_client_config_tests
I0813 19:58:03.700] 
I0813 19:58:03.702] +++ Running case: test-cmd.run_client_config_tests 
I0813 19:58:03.706] +++ working dir: /go/src/k8s.io/kubernetes
I0813 19:58:03.708] +++ command: run_client_config_tests
I0813 19:58:03.723] +++ [0813 19:58:03] Creating namespace namespace-1565726283-16141
I0813 19:58:03.808] namespace/namespace-1565726283-16141 created
I0813 19:58:03.883] Context "test" modified.
I0813 19:58:03.891] +++ [0813 19:58:03] Testing client config
I0813 19:58:03.964] Successful
I0813 19:58:03.965] message:error: stat missing: no such file or directory
I0813 19:58:03.966] has:missing: no such file or directory
I0813 19:58:04.039] Successful
I0813 19:58:04.040] message:error: stat missing: no such file or directory
I0813 19:58:04.041] has:missing: no such file or directory
I0813 19:58:04.120] Successful
I0813 19:58:04.121] message:error: stat missing: no such file or directory
I0813 19:58:04.121] has:missing: no such file or directory
I0813 19:58:04.198] Successful
I0813 19:58:04.199] message:Error in configuration: context was not found for specified context: missing-context
I0813 19:58:04.199] has:context was not found for specified context: missing-context
I0813 19:58:04.277] Successful
I0813 19:58:04.278] message:error: no server found for cluster "missing-cluster"
I0813 19:58:04.278] has:no server found for cluster "missing-cluster"
I0813 19:58:04.356] Successful
I0813 19:58:04.357] message:error: auth info "missing-user" does not exist
I0813 19:58:04.357] has:auth info "missing-user" does not exist
W0813 19:58:04.458] E0813 19:58:03.682563   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:04.459] E0813 19:58:03.792617   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:04.460] E0813 19:58:03.932895   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:04.460] E0813 19:58:04.039325   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:04.561] Successful
I0813 19:58:04.561] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0813 19:58:04.561] has:error loading config file
I0813 19:58:04.587] Successful
I0813 19:58:04.587] message:error: stat missing-config: no such file or directory
I0813 19:58:04.587] has:no such file or directory
I0813 19:58:04.601] +++ exit code: 0
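The client-config failures above come from pointing kubectl at kubeconfig pieces that do not exist; any command that loads client config reproduces them (the exact subcommand core.sh runs is not shown in this excerpt):
  kubectl get pods --kubeconfig=missing              # "stat missing: no such file or directory"
  kubectl get pods --context=missing-context         # "context was not found for specified context"
  kubectl get pods --cluster=missing-cluster         # "no server found for cluster"
  kubectl get pods --user=missing-user               # "auth info \"missing-user\" does not exist"
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml  # file declares an unknown Config version -> "error loading config file"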
I0813 19:58:04.640] Recording: run_service_accounts_tests
I0813 19:58:04.640] Running command: run_service_accounts_tests
I0813 19:58:04.660] 
I0813 19:58:04.662] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0813 19:58:05.020] namespace/test-service-accounts created
I0813 19:58:05.115] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0813 19:58:05.196] serviceaccount/test-service-account created
I0813 19:58:05.295] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0813 19:58:05.379] serviceaccount "test-service-account" deleted
I0813 19:58:05.467] namespace "test-service-accounts" deleted
W0813 19:58:05.569] E0813 19:58:04.684571   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:05.569] E0813 19:58:04.794216   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:05.570] E0813 19:58:04.934321   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:05.570] E0813 19:58:05.041072   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:05.687] E0813 19:58:05.687090   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:05.797] E0813 19:58:05.796353   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:05.937] E0813 19:58:05.936250   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:06.043] E0813 19:58:06.043024   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:06.689] E0813 19:58:06.688891   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:06.799] E0813 19:58:06.798093   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:06.938] E0813 19:58:06.938040   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:07.046] E0813 19:58:07.045544   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:07.691] E0813 19:58:07.690546   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:07.800] E0813 19:58:07.799798   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:07.940] E0813 19:58:07.939755   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:08.048] E0813 19:58:08.047262   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:08.692] E0813 19:58:08.691867   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:08.802] E0813 19:58:08.801369   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:08.942] E0813 19:58:08.941577   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:09.050] E0813 19:58:09.049167   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:09.695] E0813 19:58:09.694755   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:09.804] E0813 19:58:09.803446   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:09.944] E0813 19:58:09.943724   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:10.051] E0813 19:58:10.050838   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:10.591] +++ exit code: 0
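The service-account case above (core.sh:832-838) reduces to a short kubectl sequence; a sketch:
  kubectl create namespace test-service-accounts
  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl get serviceaccount/test-service-account --namespace=test-service-accounts -o go-template='{{.metadata.name}}'
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete namespace test-service-accounts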
I0813 19:58:10.638] Recording: run_job_tests
I0813 19:58:10.638] Running command: run_job_tests
I0813 19:58:10.662] 
I0813 19:58:10.665] +++ Running case: test-cmd.run_job_tests 
I0813 19:58:10.667] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0813 19:58:11.496] Labels:                        run=pi
I0813 19:58:11.497] Annotations:                   <none>
I0813 19:58:11.497] Schedule:                      59 23 31 2 *
I0813 19:58:11.497] Concurrency Policy:            Allow
I0813 19:58:11.497] Suspend:                       False
I0813 19:58:11.497] Successful Job History Limit:  3
I0813 19:58:11.497] Failed Job History Limit:      1
I0813 19:58:11.498] Starting Deadline Seconds:     <unset>
I0813 19:58:11.498] Selector:                      <unset>
I0813 19:58:11.498] Parallelism:                   <unset>
I0813 19:58:11.498] Completions:                   <unset>
I0813 19:58:11.498] Pod Template:
I0813 19:58:11.498]   Labels:  run=pi
... skipping 32 lines ...
I0813 19:58:12.079]                 run=pi
I0813 19:58:12.079] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0813 19:58:12.080] Controlled By:  CronJob/pi
I0813 19:58:12.080] Parallelism:    1
I0813 19:58:12.080] Completions:    1
I0813 19:58:12.081] Start Time:     Tue, 13 Aug 2019 19:58:11 +0000
I0813 19:58:12.081] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0813 19:58:12.081] Pod Template:
I0813 19:58:12.082]   Labels:  controller-uid=1ef24e12-9f51-483a-a858-08bd849faf1d
I0813 19:58:12.082]            job-name=test-job
I0813 19:58:12.082]            run=pi
I0813 19:58:12.083]   Containers:
I0813 19:58:12.083]    pi:
... skipping 15 lines ...
I0813 19:58:12.086]   Type    Reason            Age   From            Message
I0813 19:58:12.086]   ----    ------            ----  ----            -------
I0813 19:58:12.086]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-7rcv4
I0813 19:58:12.163] job.batch "test-job" deleted
I0813 19:58:12.252] cronjob.batch "pi" deleted
I0813 19:58:12.342] namespace "test-jobs" deleted
W0813 19:58:12.443] E0813 19:58:10.696488   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.444] E0813 19:58:10.805187   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.444] E0813 19:58:10.945333   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.444] E0813 19:58:11.052341   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.444] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0813 19:58:12.444] E0813 19:58:11.698266   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.445] I0813 19:58:11.785124   53082 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"1ef24e12-9f51-483a-a858-08bd849faf1d", APIVersion:"batch/v1", ResourceVersion:"1347", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-7rcv4
W0813 19:58:12.445] E0813 19:58:11.806797   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.445] E0813 19:58:11.946816   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.445] E0813 19:58:12.053979   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.700] E0813 19:58:12.700050   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.809] E0813 19:58:12.808930   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:12.949] E0813 19:58:12.948518   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:13.056] E0813 19:58:13.055625   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:13.702] E0813 19:58:13.701757   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:13.812] E0813 19:58:13.811901   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:13.951] E0813 19:58:13.950139   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:14.058] E0813 19:58:14.057362   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:14.704] E0813 19:58:14.703371   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:14.814] E0813 19:58:14.813436   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:14.952] E0813 19:58:14.951766   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:15.060] E0813 19:58:15.059268   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:15.705] E0813 19:58:15.705210   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:15.816] E0813 19:58:15.815304   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:15.954] E0813 19:58:15.953505   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:16.061] E0813 19:58:16.061023   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:16.707] E0813 19:58:16.707052   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:16.817] E0813 19:58:16.817068   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:16.955] E0813 19:58:16.955186   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:17.063] E0813 19:58:17.062891   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:17.468] +++ exit code: 0
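The job case above creates CronJob "pi" through the (deprecated) run generator and then instantiates Job "test-job" from it; the image and container args below are placeholders, only the schedule is taken from the describe output:
  kubectl run pi --generator=cronjob/v1beta1 --schedule="59 23 31 2 *" --restart=OnFailure --namespace=test-jobs --image=perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
  # manual instantiation sets the cronjob.kubernetes.io/instantiate: manual annotation seen above
  kubectl create job test-job --from=cronjob/pi --namespace=test-jobs
  kubectl describe job test-job --namespace=test-jobs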
I0813 19:58:17.504] Recording: run_create_job_tests
I0813 19:58:17.504] Running command: run_create_job_tests
I0813 19:58:17.524] 
I0813 19:58:17.526] +++ Running case: test-cmd.run_create_job_tests 
I0813 19:58:17.528] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 24 lines ...
I0813 19:58:18.834] +++ [0813 19:58:18] Creating namespace namespace-1565726298-30305
I0813 19:58:18.917] namespace/namespace-1565726298-30305 created
I0813 19:58:18.993] Context "test" modified.
I0813 19:58:19.000] +++ [0813 19:58:18] Testing pod templates
I0813 19:58:19.103] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:58:19.289] podtemplate/nginx created
W0813 19:58:19.390] E0813 19:58:17.708723   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.391] I0813 19:58:17.788926   53082 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565726297-4202", Name:"test-job", UID:"abf30a2b-9a32-4e92-b57c-dc19b718be65", APIVersion:"batch/v1", ResourceVersion:"1364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-ks45d
W0813 19:58:19.391] E0813 19:58:17.818705   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.392] E0813 19:58:17.956888   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.392] E0813 19:58:18.065706   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.393] I0813 19:58:18.067216   53082 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565726297-4202", Name:"test-job-pi", UID:"0dd2ef09-4c1d-4aa4-b42a-dcd67d0b5608", APIVersion:"batch/v1", ResourceVersion:"1371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-b8gfl
W0813 19:58:19.393] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0813 19:58:19.394] I0813 19:58:18.458981   53082 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565726297-4202", Name:"my-pi", UID:"b7b44875-b7d7-4337-a479-59e6bb974654", APIVersion:"batch/v1", ResourceVersion:"1380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-lv6x8
W0813 19:58:19.394] E0813 19:58:18.710156   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.395] E0813 19:58:18.820245   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.395] E0813 19:58:18.958788   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.395] E0813 19:58:19.068103   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.396] I0813 19:58:19.286328   49606 controller.go:606] quota admission added evaluator for: podtemplates
I0813 19:58:19.496] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0813 19:58:19.497] NAME    CONTAINERS   IMAGES   POD LABELS
I0813 19:58:19.497] nginx   nginx        nginx    name=nginx
I0813 19:58:19.680] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0813 19:58:19.770] podtemplate "nginx" deleted
W0813 19:58:19.871] E0813 19:58:19.711876   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.871] E0813 19:58:19.821782   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:19.961] E0813 19:58:19.960498   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:20.062] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:58:20.062] +++ exit code: 0
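The pod-template case above creates PodTemplate "nginx" from a manifest (the manifest path is not shown in this log); the list and cleanup steps correspond to:
  kubectl create -f <podtemplate-manifest.yaml>
  kubectl get podtemplates -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'
  kubectl get podtemplates        # prints the NAME / CONTAINERS / IMAGES / POD LABELS table above
  kubectl delete podtemplate nginx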
I0813 19:58:20.062] Recording: run_service_tests
I0813 19:58:20.062] Running command: run_service_tests
I0813 19:58:20.062] 
I0813 19:58:20.062] +++ Running case: test-cmd.run_service_tests 
I0813 19:58:20.063] +++ working dir: /go/src/k8s.io/kubernetes
I0813 19:58:20.063] +++ command: run_service_tests
I0813 19:58:20.063] Context "test" modified.
I0813 19:58:20.063] +++ [0813 19:58:20] Testing kubectl(v1:services)
I0813 19:58:20.157] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 19:58:20.335] service/redis-master created
W0813 19:58:20.436] E0813 19:58:20.069751   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:20.536] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0813 19:58:20.584] core.sh:864: Successful describe services redis-master:
I0813 19:58:20.584] Name:              redis-master
I0813 19:58:20.584] Namespace:         default
I0813 19:58:20.585] Labels:            app=redis
I0813 19:58:20.585]                    role=master
... skipping 35 lines ...
I0813 19:58:20.803] IP:                10.0.0.227
I0813 19:58:20.803] Port:              <unset>  6379/TCP
I0813 19:58:20.803] TargetPort:        6379/TCP
I0813 19:58:20.804] Endpoints:         <none>
I0813 19:58:20.804] Session Affinity:  None
I0813 19:58:20.804] 
W0813 19:58:20.904] E0813 19:58:20.713517   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:20.905] E0813 19:58:20.823170   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:20.962] E0813 19:58:20.962173   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:21.063] core.sh:870: Successful describe
I0813 19:58:21.064] Name:              redis-master
I0813 19:58:21.064] Namespace:         default
I0813 19:58:21.064] Labels:            app=redis
I0813 19:58:21.064]                    role=master
I0813 19:58:21.064]                    tier=backend
... skipping 254 lines ...
I0813 19:58:22.147]   selector:
I0813 19:58:22.148]     role: padawan
I0813 19:58:22.148]   sessionAffinity: None
I0813 19:58:22.148]   type: ClusterIP
I0813 19:58:22.148] status:
I0813 19:58:22.148]   loadBalancer: {}
W0813 19:58:22.248] E0813 19:58:21.071230   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:22.249] E0813 19:58:21.715215   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:22.249] E0813 19:58:21.825174   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:22.249] E0813 19:58:21.963795   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:22.250] E0813 19:58:22.072998   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:22.250] error: you must specify resources by --filename when --local is set.
W0813 19:58:22.250] Example resource specifications include:
W0813 19:58:22.250]    '-f rsrc.yaml'
W0813 19:58:22.250]    '--filename=rsrc.json'
I0813 19:58:22.350] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0813 19:58:22.510] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0813 19:58:22.598] service "redis-master" deleted
I0813 19:58:22.698] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 19:58:22.795] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 19:58:22.972] service/redis-master created
I0813 19:58:23.077] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0813 19:58:23.173] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0813 19:58:23.336] service/service-v1-test created
W0813 19:58:23.437] E0813 19:58:22.717067   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:23.438] E0813 19:58:22.827270   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:23.439] E0813 19:58:22.965331   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:23.440] E0813 19:58:23.074855   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:23.540] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0813 19:58:23.632] service/service-v1-test replaced
I0813 19:58:23.733] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0813 19:58:23.825] service "redis-master" deleted
I0813 19:58:23.921] service "service-v1-test" deleted
I0813 19:58:24.026] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 19:58:24.130] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 19:58:24.307] service/redis-master created
W0813 19:58:24.408] E0813 19:58:23.719418   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:24.408] E0813 19:58:23.828497   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:24.408] E0813 19:58:23.966964   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:24.409] E0813 19:58:24.076734   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:24.509] service/redis-slave created
I0813 19:58:24.594] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0813 19:58:24.690] Successful
I0813 19:58:24.691] message:NAME           RSRC
I0813 19:58:24.691] kubernetes     144
I0813 19:58:24.691] redis-master   1415
I0813 19:58:24.692] redis-slave    1418
I0813 19:58:24.692] has:redis-master
I0813 19:58:24.789] core.sh:979: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0813 19:58:24.881] service "redis-master" deleted
I0813 19:58:24.890] service "redis-slave" deleted
W0813 19:58:24.991] E0813 19:58:24.720947   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:24.991] E0813 19:58:24.830082   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:24.992] E0813 19:58:24.968642   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:25.079] E0813 19:58:25.078527   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:25.180] core.sh:986: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 19:58:25.180] core.sh:990: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 19:58:25.191] service/beep-boop created
I0813 19:58:25.294] core.sh:994: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0813 19:58:25.390] core.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0813 19:58:25.479] service "beep-boop" deleted
I0813 19:58:25.589] core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 19:58:25.683] core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:58:25.791] service/testmetadata created
I0813 19:58:25.792] deployment.apps/testmetadata created
W0813 19:58:25.893] E0813 19:58:25.722565   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:25.894] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0813 19:58:25.895] I0813 19:58:25.774627   53082 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"1f07d10b-01d4-4b86-bf40-a5d31dd3f342", APIVersion:"apps/v1", ResourceVersion:"1432", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0813 19:58:25.895] I0813 19:58:25.782263   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"9260911e-6645-49b9-9f7c-a69ad97efd24", APIVersion:"apps/v1", ResourceVersion:"1433", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-2sfw5
W0813 19:58:25.896] I0813 19:58:25.787495   53082 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"9260911e-6645-49b9-9f7c-a69ad97efd24", APIVersion:"apps/v1", ResourceVersion:"1433", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-nzj9n
W0813 19:58:25.896] E0813 19:58:25.832185   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:25.971] E0813 19:58:25.970293   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:26.072] core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I0813 19:58:26.072] core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
I0813 19:58:26.098] service/exposemetadata exposed
I0813 19:58:26.201] core.sh:1020: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
I0813 19:58:26.290] service "exposemetadata" deleted
I0813 19:58:26.301] service "testmetadata" deleted
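The expose/annotation checks above (core.sh:1013-1020) can be approximated with stock commands; this is not the exact core.sh invocation, which sets the annotations at creation time rather than with kubectl annotate:
  kubectl expose deployment testmetadata --port=80 --name=exposemetadata
  kubectl annotate service exposemetadata zone-context=work
  kubectl get service exposemetadata -o go-template='{{.metadata.annotations}}'
  kubectl delete service exposemetadata testmetadata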
... skipping 8 lines ...
I0813 19:58:26.517] +++ [0813 19:58:26] Creating namespace namespace-1565726306-23770
I0813 19:58:26.596] namespace/namespace-1565726306-23770 created
I0813 19:58:26.673] Context "test" modified.
I0813 19:58:26.680] +++ [0813 19:58:26] Testing kubectl(v1:daemonsets)
I0813 19:58:26.778] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:58:26.980] daemonset.apps/bind created
W0813 19:58:27.081] E0813 19:58:26.080433   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:27.082] E0813 19:58:26.724422   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:27.082] E0813 19:58:26.833777   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:27.082] E0813 19:58:26.971875   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:27.082] I0813 19:58:26.976744   49606 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0813 19:58:27.083] I0813 19:58:26.989981   49606 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0813 19:58:27.084] E0813 19:58:27.083398   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:27.184] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I0813 19:58:27.268] daemonset.apps/bind configured
I0813 19:58:27.372] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I0813 19:58:27.474] daemonset.apps/bind image updated
I0813 19:58:27.581] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I0813 19:58:27.679] daemonset.apps/bind env updated
W0813 19:58:27.781] E0813 19:58:27.726205   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:27.836] E0813 19:58:27.835498   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:27.937] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I0813 19:58:27.938] daemonset.apps/bind resource requirements updated
I0813 19:58:28.003] apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
I0813 19:58:28.106] daemonset.apps/bind restarted
I0813 19:58:28.211] apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
I0813 19:58:28.294] daemonset.apps "bind" deleted
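The generation checks above (apps.sh:34-48) walk DaemonSet "bind" through a series of template updates, each of which bumps .metadata.generation; a sketch with placeholder manifest, container, and image names:
  kubectl create -f <daemonset-manifest.yaml>
  kubectl set image daemonsets/bind <container>=<new-image>              # generation 1 -> 2
  kubectl set env daemonsets/bind NEW_VAR=value                          # generation 2 -> 3
  kubectl set resources daemonsets/bind --limits=cpu=200m,memory=512Mi   # generation 3 -> 4
  kubectl rollout restart daemonset/bind                                 # generation 4 -> 5
  kubectl get daemonsets bind -o go-template='{{.metadata.generation}}'
  kubectl delete daemonset bind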
... skipping 7 lines ...
I0813 19:58:28.395] +++ [0813 19:58:28] Creating namespace namespace-1565726308-7122
I0813 19:58:28.475] namespace/namespace-1565726308-7122 created
I0813 19:58:28.554] Context "test" modified.
I0813 19:58:28.563] +++ [0813 19:58:28] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I0813 19:58:28.661] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 19:58:28.853] daemonset.apps/bind created
W0813 19:58:28.954] E0813 19:58:27.973497   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:28.955] E0813 19:58:28.085255   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:28.955] E0813 19:58:28.728118   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:28.956] E0813 19:58:28.837319   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 19:58:28.976] E0813 19:58:28.975282   53082 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 19:58:29.077] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/la