Result: FAILURE
Tests: 1 failed / 2469 succeeded
Started: 2019-08-13 02:27
Elapsed: 26m52s
Revision:
Builder: gke-prow-ssd-pool-1a225945-cx5g
pod: cc3f6e6b-bd71-11e9-a0ae-ea43db2f3479
resultstore: https://source.cloud.google.com/results/invocations/7eeec89b-7ce0-4bdc-aa1e-313bcc1ec18a/targets/test
infra-commit: 233482656
repo: k8s.io/kubernetes
repo-commit: 890b50f98bc981275993b5fdc9b5d450364b2a42
repos: {k8s.io/kubernetes: master}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
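To reproduce locally, a minimal sketch of running the same command (assumptions not taken from this job: it is run from a kubernetes repository checkout and etcd is available on PATH, since the integration suite starts its own etcd-backed apiserver; -count and -timeout are standard go test flags added here to avoid cached results and hangs):

    # assumption: repo checked out under GOPATH; etcd binary on PATH
    cd $GOPATH/src/k8s.io/kubernetes
    go test -v -count=1 -timeout 600s \
      k8s.io/kubernetes/test/integration/scheduler \
      -run 'TestPreemptWithPermitPlugin$'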
=== RUN   TestPreemptWithPermitPlugin
I0813 02:49:37.565490  110787 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0813 02:49:37.565514  110787 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0813 02:49:37.565526  110787 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0813 02:49:37.565536  110787 master.go:234] Using reconciler: 
I0813 02:49:37.567292  110787 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.567396  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.567409  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.567449  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.567513  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.567902  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.567950  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.568079  110787 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0813 02:49:37.568114  110787 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.568307  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.568328  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.568363  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.568385  110787 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0813 02:49:37.568420  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.568747  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.568783  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.568883  110787 store.go:1342] Monitoring events count at <storage-prefix>//events
I0813 02:49:37.568915  110787 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.568974  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.568987  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.569023  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.569025  110787 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0813 02:49:37.569067  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.569334  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.569387  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.569451  110787 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0813 02:49:37.569479  110787 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.569537  110787 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0813 02:49:37.569540  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.569559  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.569616  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.569654  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.570377  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.570438  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.570472  110787 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0813 02:49:37.570560  110787 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0813 02:49:37.570653  110787 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.570886  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.570907  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.570939  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.570998  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.571429  110787 watch_cache.go:405] Replace watchCache (rev: 29196) 
I0813 02:49:37.571834  110787 watch_cache.go:405] Replace watchCache (rev: 29196) 
I0813 02:49:37.571896  110787 watch_cache.go:405] Replace watchCache (rev: 29196) 
I0813 02:49:37.572243  110787 watch_cache.go:405] Replace watchCache (rev: 29196) 
I0813 02:49:37.572699  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.572911  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.573849  110787 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0813 02:49:37.573279  110787 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0813 02:49:37.574577  110787 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.574898  110787 watch_cache.go:405] Replace watchCache (rev: 29196) 
I0813 02:49:37.575101  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.575512  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.575847  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.576240  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.576818  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.576986  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.577240  110787 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0813 02:49:37.577364  110787 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0813 02:49:37.577509  110787 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.577723  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.577844  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.578076  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.578256  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.580222  110787 watch_cache.go:405] Replace watchCache (rev: 29197) 
I0813 02:49:37.581836  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.581998  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.582314  110787 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0813 02:49:37.582360  110787 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0813 02:49:37.582616  110787 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.582902  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.582914  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.583126  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.583173  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.583916  110787 watch_cache.go:405] Replace watchCache (rev: 29198) 
I0813 02:49:37.584045  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.584095  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.584133  110787 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0813 02:49:37.584254  110787 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.584308  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.584317  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.584345  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.584386  110787 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0813 02:49:37.584526  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.585169  110787 watch_cache.go:405] Replace watchCache (rev: 29198) 
I0813 02:49:37.585374  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.585455  110787 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0813 02:49:37.585579  110787 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.585662  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.585671  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.585697  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.585729  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.585774  110787 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0813 02:49:37.585911  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.586130  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.586181  110787 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0813 02:49:37.586261  110787 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.586300  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.586307  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.586324  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.586348  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.586366  110787 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0813 02:49:37.586487  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.586747  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.586819  110787 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0813 02:49:37.586914  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.586974  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.586982  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.587000  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.587031  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.587055  110787 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0813 02:49:37.587152  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.587449  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.587550  110787 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0813 02:49:37.587686  110787 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.587740  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.587747  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.587769  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.587795  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.587819  110787 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0813 02:49:37.587940  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.588257  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.588328  110787 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0813 02:49:37.588422  110787 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.588484  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.588494  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.588533  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.588611  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.588656  110787 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0813 02:49:37.588835  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.589224  110787 watch_cache.go:405] Replace watchCache (rev: 29198) 
I0813 02:49:37.589523  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.589568  110787 watch_cache.go:405] Replace watchCache (rev: 29198) 
I0813 02:49:37.589637  110787 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0813 02:49:37.589654  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.589524  110787 watch_cache.go:405] Replace watchCache (rev: 29198) 
I0813 02:49:37.589663  110787 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.589746  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.589756  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.589783  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.589827  110787 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0813 02:49:37.590268  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.590667  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.590730  110787 watch_cache.go:405] Replace watchCache (rev: 29198) 
I0813 02:49:37.590761  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.590772  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.590784  110787 watch_cache.go:405] Replace watchCache (rev: 29198) 
I0813 02:49:37.590799  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.590843  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.590884  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.591372  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.591521  110787 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.591556  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.591604  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.591643  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.591679  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.591757  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.591772  110787 watch_cache.go:405] Replace watchCache (rev: 29198) 
I0813 02:49:37.592171  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.592221  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.592465  110787 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0813 02:49:37.592658  110787 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0813 02:49:37.593088  110787 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.593251  110787 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.593392  110787 watch_cache.go:405] Replace watchCache (rev: 29199) 
I0813 02:49:37.593927  110787 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.594403  110787 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.594956  110787 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.595537  110787 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.595998  110787 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.596111  110787 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.596350  110787 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.597124  110787 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.598080  110787 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.598377  110787 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.599379  110787 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.599974  110787 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.601291  110787 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.601698  110787 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.602345  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.602667  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.603044  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.603392  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.603832  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.604134  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.604478  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.605344  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.605544  110787 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.606261  110787 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.606886  110787 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.607225  110787 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.607548  110787 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.608675  110787 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.609217  110787 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.612738  110787 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.613340  110787 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.615161  110787 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.616025  110787 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.616310  110787 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.616487  110787 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0813 02:49:37.616596  110787 master.go:434] Enabling API group "authentication.k8s.io".
I0813 02:49:37.616666  110787 master.go:434] Enabling API group "authorization.k8s.io".
I0813 02:49:37.616840  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.616944  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.616980  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.617031  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.617128  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.617566  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.617689  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.617764  110787 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0813 02:49:37.617908  110787 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0813 02:49:37.617929  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.618024  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.618037  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.618073  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.618140  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.618469  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.618541  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.618635  110787 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0813 02:49:37.618781  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.618862  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.618875  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.618908  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.618959  110787 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0813 02:49:37.619217  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.619538  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.619576  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.619776  110787 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0813 02:49:37.619801  110787 master.go:434] Enabling API group "autoscaling".
I0813 02:49:37.619927  110787 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.619987  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.619998  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.620037  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.620045  110787 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0813 02:49:37.620169  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.620273  110787 watch_cache.go:405] Replace watchCache (rev: 29205) 
I0813 02:49:37.621279  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.621371  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.621392  110787 watch_cache.go:405] Replace watchCache (rev: 29205) 
I0813 02:49:37.621424  110787 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0813 02:49:37.621477  110787 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0813 02:49:37.621647  110787 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.621771  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.621832  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.621869  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.621964  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.623059  110787 watch_cache.go:405] Replace watchCache (rev: 29206) 
I0813 02:49:37.623136  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.623244  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.623282  110787 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0813 02:49:37.623300  110787 master.go:434] Enabling API group "batch".
I0813 02:49:37.623351  110787 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0813 02:49:37.623369  110787 watch_cache.go:405] Replace watchCache (rev: 29205) 
I0813 02:49:37.623436  110787 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.623507  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.623618  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.623661  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.623700  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.624930  110787 watch_cache.go:405] Replace watchCache (rev: 29206) 
I0813 02:49:37.624996  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.625036  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.625086  110787 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0813 02:49:37.625112  110787 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0813 02:49:37.625136  110787 master.go:434] Enabling API group "certificates.k8s.io".
I0813 02:49:37.625323  110787 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.625429  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.625443  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.625494  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.625574  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.626128  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.626152  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.626170  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.626226  110787 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0813 02:49:37.626279  110787 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0813 02:49:37.626349  110787 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.626417  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.626427  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.626455  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.626496  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.626975  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.627060  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.627066  110787 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0813 02:49:37.627082  110787 master.go:434] Enabling API group "coordination.k8s.io".
I0813 02:49:37.627108  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.627208  110787 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.627289  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.627299  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.627332  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.627354  110787 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0813 02:49:37.627404  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.627687  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.627725  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.627795  110787 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0813 02:49:37.627816  110787 master.go:434] Enabling API group "extensions".
I0813 02:49:37.627945  110787 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.628045  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.628057  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.628087  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.628134  110787 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0813 02:49:37.628157  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.628308  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.628666  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.628702  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.628788  110787 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0813 02:49:37.628824  110787 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0813 02:49:37.628920  110787 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.628985  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.628996  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.629037  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.629113  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.629338  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.629435  110787 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0813 02:49:37.629455  110787 master.go:434] Enabling API group "networking.k8s.io".
I0813 02:49:37.629484  110787 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.629543  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.629554  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.629637  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.629687  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.629716  110787 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0813 02:49:37.629909  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.630187  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.630225  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.630303  110787 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0813 02:49:37.630321  110787 master.go:434] Enabling API group "node.k8s.io".
I0813 02:49:37.630360  110787 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0813 02:49:37.630459  110787 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.630528  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.630538  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.630571  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.630639  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.632204  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.632292  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.632308  110787 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0813 02:49:37.632309  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.632338  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.632433  110787 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.632522  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.632534  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.632561  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.632563  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.632612  110787 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0813 02:49:37.632713  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.632727  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.632986  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.633062  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.633094  110787 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0813 02:49:37.633105  110787 master.go:434] Enabling API group "policy".
I0813 02:49:37.633132  110787 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.633155  110787 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0813 02:49:37.633183  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.633192  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.633220  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.633326  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.633567  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.633714  110787 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0813 02:49:37.633776  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.633802  110787 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0813 02:49:37.633837  110787 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.633898  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.633921  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.633953  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.634012  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.634239  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.634261  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.634432  110787 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0813 02:49:37.634491  110787 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0813 02:49:37.634677  110787 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.634746  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.634756  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.634788  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.634838  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.635058  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.635142  110787 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0813 02:49:37.635165  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.635202  110787 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0813 02:49:37.635259  110787 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.635312  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.635321  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.635347  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.635386  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.635637  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.635721  110787 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0813 02:49:37.635755  110787 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.635807  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.635822  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.635847  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.635889  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.635913  110787 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0813 02:49:37.636099  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.636367  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.636453  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.636459  110787 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0813 02:49:37.636479  110787 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0813 02:49:37.636598  110787 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.636647  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.636654  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.636673  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.636702  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.636857  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.636879  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.636924  110787 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0813 02:49:37.636954  110787 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.636995  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.637003  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.637026  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.637040  110787 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0813 02:49:37.637154  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.638063  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.638150  110787 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0813 02:49:37.638232  110787 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.638274  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.638280  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.638303  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.638340  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.638361  110787 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0813 02:49:37.638514  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.640772  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.640854  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.641230  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.641253  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.641345  110787 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0813 02:49:37.641365  110787 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0813 02:49:37.641525  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.641578  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.641986  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.642050  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.642386  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.642496  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.642709  110787 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0813 02:49:37.643195  110787 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.643266  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.643275  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.643306  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.643425  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.643669  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.643755  110787 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0813 02:49:37.643862  110787 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.643913  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.643922  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.643947  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.643982  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.644007  110787 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0813 02:49:37.644179  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.644383  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.644445  110787 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0813 02:49:37.644455  110787 master.go:434] Enabling API group "scheduling.k8s.io".
I0813 02:49:37.645340  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.645398  110787 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0813 02:49:37.646892  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.647228  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.647404  110787 watch_cache.go:405] Replace watchCache (rev: 29207) 
I0813 02:49:37.648011  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.648882  110787 master.go:423] Skipping disabled API group "settings.k8s.io".
I0813 02:49:37.649244  110787 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.649350  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.649820  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.649998  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.650369  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.651776  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.651858  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.651886  110787 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0813 02:49:37.651908  110787 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0813 02:49:37.652025  110787 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.652131  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.652145  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.652174  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.652332  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.652620  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.652690  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.652737  110787 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0813 02:49:37.652765  110787 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.652796  110787 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0813 02:49:37.652826  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.652835  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.652861  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.652939  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.653248  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.653269  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.653371  110787 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0813 02:49:37.653418  110787 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.653470  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.653479  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.653498  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.653518  110787 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0813 02:49:37.653756  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.654022  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.654098  110787 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0813 02:49:37.654184  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.654255  110787 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.654273  110787 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0813 02:49:37.654322  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.654332  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.654359  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.654492  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.654761  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.654855  110787 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0813 02:49:37.654944  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.655014  110787 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.655110  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.655113  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.655151  110787 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0813 02:49:37.655161  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.655193  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.655654  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.655660  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.655692  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.655978  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.656059  110787 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0813 02:49:37.656072  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.656075  110787 master.go:434] Enabling API group "storage.k8s.io".
I0813 02:49:37.656088  110787 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0813 02:49:37.656288  110787 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.656360  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.656369  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.656398  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.656476  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.656933  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.656940  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.657052  110787 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0813 02:49:37.657084  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.657194  110787 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.657262  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.657271  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.657297  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.657309  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.657343  110787 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0813 02:49:37.657346  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.657654  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.657728  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.657782  110787 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0813 02:49:37.657904  110787 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0813 02:49:37.657916  110787 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.658001  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.658012  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.658073  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.658088  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.658120  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.658323  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.658444  110787 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0813 02:49:37.658482  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.658659  110787 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.658738  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.658751  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.658775  110787 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0813 02:49:37.658784  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.658831  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.659206  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.659311  110787 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0813 02:49:37.659407  110787 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.659465  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.659474  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.659572  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.659630  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.659656  110787 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0813 02:49:37.659830  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.660060  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.660121  110787 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0813 02:49:37.660132  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.660133  110787 master.go:434] Enabling API group "apps".
I0813 02:49:37.660207  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.660227  110787 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0813 02:49:37.660255  110787 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.660327  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.660340  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.660372  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.660495  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.660678  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.660704  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.660749  110787 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0813 02:49:37.660773  110787 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.660811  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.660817  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.660836  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.660869  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.661210  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.661435  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.661490  110787 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0813 02:49:37.661509  110787 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.661647  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.661661  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.661699  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.661734  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.661752  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.661882  110787 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0813 02:49:37.662039  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.662318  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.662414  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.662442  110787 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0813 02:49:37.662473  110787 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.662562  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.662573  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.662633  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.662699  110787 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0813 02:49:37.662785  110787 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0813 02:49:37.662803  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.662996  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.663199  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.663263  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.663344  110787 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0813 02:49:37.663358  110787 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0813 02:49:37.663374  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.663386  110787 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.663682  110787 client.go:354] parsed scheme: ""
I0813 02:49:37.663701  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:37.663731  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:37.663734  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.663829  110787 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0813 02:49:37.663913  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.664442  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:37.664484  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.664523  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.664657  110787 store.go:1342] Monitoring events count at <storage-prefix>//events
I0813 02:49:37.664679  110787 master.go:434] Enabling API group "events.k8s.io".
I0813 02:49:37.664720  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:37.664765  110787 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0813 02:49:37.664907  110787 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.665163  110787 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.665666  110787 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.665707  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.665788  110787 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.665947  110787 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.666037  110787 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.666214  110787 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.666293  110787 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.666479  110787 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.666618  110787 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.666930  110787 watch_cache.go:405] Replace watchCache (rev: 29208) 
I0813 02:49:37.667616  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.667929  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.679895  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.680473  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.682902  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.683284  110787 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.683968  110787 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.684225  110787 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.685007  110787 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.685289  110787 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 02:49:37.685331  110787 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0813 02:49:37.685893  110787 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.686019  110787 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.686267  110787 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.687002  110787 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.687766  110787 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.688622  110787 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.688882  110787 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.689621  110787 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.690283  110787 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.690563  110787 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.691232  110787 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 02:49:37.691301  110787 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0813 02:49:37.691960  110787 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.692198  110787 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.692763  110787 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.693419  110787 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.693924  110787 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.694856  110787 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.695931  110787 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.696428  110787 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.696891  110787 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.697556  110787 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.698350  110787 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 02:49:37.698433  110787 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0813 02:49:37.699185  110787 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.700214  110787 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 02:49:37.700275  110787 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0813 02:49:37.701006  110787 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.701652  110787 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.701920  110787 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.702486  110787 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.703047  110787 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.703554  110787 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.704104  110787 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 02:49:37.704161  110787 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0813 02:49:37.704974  110787 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.705679  110787 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.706043  110787 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.707027  110787 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.707391  110787 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.707777  110787 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.708924  110787 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.709424  110787 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.709929  110787 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.711044  110787 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.711634  110787 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.712133  110787 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0813 02:49:37.712579  110787 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0813 02:49:37.712718  110787 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0813 02:49:37.713612  110787 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.714533  110787 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.715390  110787 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.716307  110787 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.717309  110787 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"53dca826-bc28-46fe-a149-c8e189aac34d", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0813 02:49:37.721663  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.21719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:37.723940  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:37.723966  110787 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0813 02:49:37.723974  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:37.723981  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:37.723987  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:37.723992  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:37.724018  110787 httplog.go:90] GET /healthz: (222.855µs) 0 [Go-http-client/1.1 127.0.0.1:39220]
I0813 02:49:37.728925  110787 httplog.go:90] GET /api/v1/services: (4.822242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:37.732695  110787 httplog.go:90] GET /api/v1/services: (895.402µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:37.734597  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:37.734626  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:37.734637  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:37.734647  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:37.734654  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:37.734678  110787 httplog.go:90] GET /healthz: (191.91µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:37.735941  110787 httplog.go:90] GET /api/v1/services: (863.044µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:37.736043  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.404101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39220]
I0813 02:49:37.737617  110787 httplog.go:90] POST /api/v1/namespaces: (1.212103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39220]
I0813 02:49:37.738199  110787 httplog.go:90] GET /api/v1/services: (2.227963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:37.739707  110787 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.670958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39220]
I0813 02:49:37.741372  110787 httplog.go:90] POST /api/v1/namespaces: (1.239994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:37.742690  110787 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (809.301µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:37.744329  110787 httplog.go:90] POST /api/v1/namespaces: (1.242503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:37.824885  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:37.824925  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:37.824938  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:37.824947  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:37.824956  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:37.825028  110787 httplog.go:90] GET /healthz: (279.7µs) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:37.835446  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:37.835484  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:37.835498  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:37.835509  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:37.835526  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:37.835573  110787 httplog.go:90] GET /healthz: (265.889µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:37.924769  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:37.924807  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:37.924819  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:37.924829  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:37.924838  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:37.924870  110787 httplog.go:90] GET /healthz: (302.792µs) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:37.935337  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:37.935373  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:37.935384  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:37.935394  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:37.935426  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:37.935460  110787 httplog.go:90] GET /healthz: (212.656µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.024925  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.024957  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.024965  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.024972  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.024977  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.025046  110787 httplog.go:90] GET /healthz: (218.066µs) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:38.035384  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.035418  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.035431  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.035442  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.035449  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.035491  110787 httplog.go:90] GET /healthz: (262.532µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.126313  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.126347  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.126360  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.126370  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.126377  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.126426  110787 httplog.go:90] GET /healthz: (262.865µs) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:38.135284  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.135319  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.135338  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.135348  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.135356  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.135382  110787 httplog.go:90] GET /healthz: (237.091µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.224791  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.224830  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.224842  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.224853  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.224862  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.224901  110787 httplog.go:90] GET /healthz: (259.306µs) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:38.235317  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.235355  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.235368  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.235378  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.235386  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.235411  110787 httplog.go:90] GET /healthz: (233.729µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.324749  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.324786  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.324798  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.324809  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.324818  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.324848  110787 httplog.go:90] GET /healthz: (296.74µs) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:38.335367  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.335399  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.335411  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.335421  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.335429  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.335455  110787 httplog.go:90] GET /healthz: (208.55µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.424747  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.424783  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.424795  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.424805  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.424827  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.424869  110787 httplog.go:90] GET /healthz: (258.597µs) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:38.435463  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.435496  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.435515  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.435525  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.435533  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.435568  110787 httplog.go:90] GET /healthz: (237.87µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.525326  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.525358  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.525370  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.525380  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.525388  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.525479  110787 httplog.go:90] GET /healthz: (281.715µs) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:38.535362  110787 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0813 02:49:38.535392  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.535405  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.535414  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.535424  110787 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.535463  110787 httplog.go:90] GET /healthz: (217.627µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.565659  110787 client.go:354] parsed scheme: ""
I0813 02:49:38.565689  110787 client.go:354] scheme "" not registered, fallback to default scheme
I0813 02:49:38.565735  110787 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0813 02:49:38.565815  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:38.566261  110787 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0813 02:49:38.566349  110787 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0813 02:49:38.625566  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.625611  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.625623  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.625632  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.625695  110787 httplog.go:90] GET /healthz: (1.037382ms) 0 [Go-http-client/1.1 127.0.0.1:39224]
I0813 02:49:38.636241  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.636268  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.636279  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.636287  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.636330  110787 httplog.go:90] GET /healthz: (1.150251ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.724077  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.847162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:38.724077  110787 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (3.563752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.724192  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.155335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39388]
I0813 02:49:38.725682  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.252631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.726369  110787 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.607807ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.726525  110787 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.988298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:38.726953  110787 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0813 02:49:38.728925  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.287101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.729223  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.729241  110787 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0813 02:49:38.729252  110787 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0813 02:49:38.729259  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0813 02:49:38.729289  110787 httplog.go:90] GET /healthz: (3.502914ms) 0 [Go-http-client/1.1 127.0.0.1:39392]
I0813 02:49:38.729379  110787 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.255759ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:38.730407  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (939.665µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.731460  110787 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.574071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:38.731555  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (797.864µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.732823  110787 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0813 02:49:38.732847  110787 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0813 02:49:38.732953  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.063495ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39224]
I0813 02:49:38.734125  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (802.756µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:38.734890  110787 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (8.143761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.735571  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.02419ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:38.736225  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.736250  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:38.736302  110787 httplog.go:90] GET /healthz: (1.011869ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.737250  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.26758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39222]
I0813 02:49:38.741747  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (4.04272ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.743940  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.795185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.744114  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0813 02:49:38.745124  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (799.253µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.748713  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.924765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.748905  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0813 02:49:38.750457  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.334197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.752437  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.446296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.752789  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0813 02:49:38.753728  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (718.773µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.755337  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.148983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.755577  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0813 02:49:38.756476  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (729.634µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.758185  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.293817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.758549  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0813 02:49:38.761016  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (941.388µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.764122  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.538889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.764389  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0813 02:49:38.765520  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (876.839µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.767372  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.30944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.767561  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0813 02:49:38.768427  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (654.859µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.770066  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.210748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.770316  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0813 02:49:38.771139  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (682.746µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.773272  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.729274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.773630  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0813 02:49:38.774889  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.012924ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.777114  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.612401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.777382  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0813 02:49:38.780086  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (2.515373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.781984  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.292424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.782194  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0813 02:49:38.783290  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (939.241µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.785542  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.813975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.785921  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0813 02:49:38.786976  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (782.317µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.788731  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.265879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.789063  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0813 02:49:38.790243  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (907.141µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.792212  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.587446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.792391  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0813 02:49:38.793356  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (812.602µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.794922  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.15309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.795207  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0813 02:49:38.796208  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (707.583µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.797767  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.221197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.797957  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0813 02:49:38.799174  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (976.241µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.801117  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.661318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.801346  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0813 02:49:38.803269  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.712791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.805036  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.334136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.805342  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0813 02:49:38.806715  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (919.249µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.808792  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.738979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.808945  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0813 02:49:38.809913  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (778.844µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.811893  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.467344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.812067  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0813 02:49:38.812979  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (678.097µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.814669  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.369977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.814957  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0813 02:49:38.816144  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (958.597µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.820288  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.652583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.820566  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0813 02:49:38.821499  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (686.496µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.823657  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.823895  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0813 02:49:38.825060  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (933.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.825519  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.825546  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:38.825659  110787 httplog.go:90] GET /healthz: (1.234303ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:38.827653  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.09066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.827906  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0813 02:49:38.829168  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.026401ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.830963  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.318298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.831259  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0813 02:49:38.832538  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (960.493µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.834540  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.430215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.834972  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0813 02:49:38.835790  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.835825  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:38.835858  110787 httplog.go:90] GET /healthz: (711.146µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.836559  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.369036ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.839142  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.099504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.839695  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0813 02:49:38.841543  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.624766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.844886  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.376365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.845259  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0813 02:49:38.846657  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.036832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.862324  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (14.522599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.862620  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0813 02:49:38.866742  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.828187ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.869186  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.861083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.869503  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0813 02:49:38.870688  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (922.15µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.872520  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.434934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.872796  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0813 02:49:38.874059  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (974.177µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.876013  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.522218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.876218  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0813 02:49:38.877452  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.025194ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.879576  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.717783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.879848  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0813 02:49:38.882940  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (2.778311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.885038  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.665749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.885272  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0813 02:49:38.886513  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.00423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.888701  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.645083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.889153  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0813 02:49:38.890304  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (860.374µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.892156  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.425932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.892330  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0813 02:49:38.893387  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (925.472µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.895339  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.45659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.895727  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0813 02:49:38.897119  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (884.618µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.899856  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.377042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.900164  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0813 02:49:38.902255  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (832.839µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.904137  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.304257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.904443  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0813 02:49:38.905887  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.296605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.908291  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.942888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.908652  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0813 02:49:38.909784  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (936.015µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.912193  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.79327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.912715  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0813 02:49:38.913887  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (783.56µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.915836  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.541777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.916089  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0813 02:49:38.917134  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (745.481µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.921341  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.573771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.921684  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0813 02:49:38.922784  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (771.037µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.924416  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.2265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.924609  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0813 02:49:38.925643  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.925852  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:38.926055  110787 httplog.go:90] GET /healthz: (1.561353ms) 0 [Go-http-client/1.1 127.0.0.1:39392]
I0813 02:49:38.925760  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (871.948µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.928559  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.89374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.929051  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0813 02:49:38.930089  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (793.916µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.932086  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.582164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.932431  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0813 02:49:38.933647  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (930.888µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.935664  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.451957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.935901  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0813 02:49:38.936970  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:38.936994  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:38.937026  110787 httplog.go:90] GET /healthz: (1.845772ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:38.937530  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.371439ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.939405  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.939673  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0813 02:49:38.940556  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (737.575µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.942525  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.420223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.942738  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0813 02:49:38.943712  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (809.917µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.945549  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.287657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.945783  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0813 02:49:38.946944  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (786.058µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.948846  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.560854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.949176  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0813 02:49:38.962843  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.45723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.983753  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.258989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:38.984148  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0813 02:49:39.002790  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.259172ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.023371  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.886452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.023654  110787 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0813 02:49:39.025375  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.025401  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.025451  110787 httplog.go:90] GET /healthz: (921.645µs) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.036400  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.036446  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.036482  110787 httplog.go:90] GET /healthz: (1.03914ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.042329  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (927.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.063427  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.883192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.063770  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0813 02:49:39.083528  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.107104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.104196  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.698743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.104405  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0813 02:49:39.122693  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.246892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.128200  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.128227  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.128260  110787 httplog.go:90] GET /healthz: (3.765324ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.135966  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.135990  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.136043  110787 httplog.go:90] GET /healthz: (828.709µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.143958  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.272035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.144205  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0813 02:49:39.162698  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.236626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.183516  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.12543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.184014  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0813 02:49:39.202547  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.12858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.223149  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.676908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.223341  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0813 02:49:39.225429  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.225466  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.225541  110787 httplog.go:90] GET /healthz: (1.041313ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.236186  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.236214  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.236253  110787 httplog.go:90] GET /healthz: (1.117983ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.242385  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (974.753µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.263340  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.961013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.263641  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0813 02:49:39.282398  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (999.331µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.303421  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.97776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.303748  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0813 02:49:39.322572  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.131444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.325242  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.325275  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.325307  110787 httplog.go:90] GET /healthz: (853.159µs) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.336010  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.336040  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.336099  110787 httplog.go:90] GET /healthz: (945.356µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.343164  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.726667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.343416  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0813 02:49:39.365056  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (3.583732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.383337  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.919759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.383547  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0813 02:49:39.407345  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (5.684469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.422938  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.521172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.423191  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0813 02:49:39.425224  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.425249  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.425284  110787 httplog.go:90] GET /healthz: (820.135µs) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.436356  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.436395  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.436424  110787 httplog.go:90] GET /healthz: (1.278773ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.442886  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.021859ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.463306  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.852976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.463492  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0813 02:49:39.482740  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.253377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.503378  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.920125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.503695  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0813 02:49:39.522421  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (997.891µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.525396  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.525427  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.525475  110787 httplog.go:90] GET /healthz: (818.606µs) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.535983  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.536016  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.536045  110787 httplog.go:90] GET /healthz: (867.912µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.544052  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.660654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.544238  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0813 02:49:39.562801  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.420538ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.583879  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.381657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.584140  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0813 02:49:39.602898  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.415467ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.623517  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.03163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.623824  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0813 02:49:39.625354  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.625386  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.625418  110787 httplog.go:90] GET /healthz: (959.568µs) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.636172  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.636203  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.636264  110787 httplog.go:90] GET /healthz: (1.084871ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.642641  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.180222ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.664692  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.298659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.664917  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0813 02:49:39.682611  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.147998ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.720138  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (18.681705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.720739  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0813 02:49:39.722576  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.152457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.725853  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.725980  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.726428  110787 httplog.go:90] GET /healthz: (1.436213ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.736349  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.736375  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.736413  110787 httplog.go:90] GET /healthz: (1.116998ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.744248  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.912708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.744883  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0813 02:49:39.762454  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.06682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.783850  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.37757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.784089  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0813 02:49:39.802941  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.433837ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.823901  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.473405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.824137  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0813 02:49:39.825342  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.825366  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.825399  110787 httplog.go:90] GET /healthz: (942.607µs) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.836185  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.836214  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.836249  110787 httplog.go:90] GET /healthz: (1.051952ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.842470  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.041298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.863410  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.992028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.864238  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0813 02:49:39.882965  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.377096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.903915  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.423014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.904157  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0813 02:49:39.922706  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.219798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.925473  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.925502  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.925592  110787 httplog.go:90] GET /healthz: (889.363µs) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:39.939572  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:39.939662  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:39.939727  110787 httplog.go:90] GET /healthz: (4.385258ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.943042  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.663188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.943572  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0813 02:49:39.962843  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.382605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.983448  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.987851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:39.983853  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0813 02:49:40.002959  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.400566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.024122  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.571002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.024371  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0813 02:49:40.025405  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.025442  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.025478  110787 httplog.go:90] GET /healthz: (1.043173ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:40.036403  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.036440  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.036480  110787 httplog.go:90] GET /healthz: (1.169125ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.042712  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.28319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.064007  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.532182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.064272  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0813 02:49:40.082877  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.296496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.103451  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.965972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.104084  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0813 02:49:40.122995  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.428092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.125670  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.125871  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.126138  110787 httplog.go:90] GET /healthz: (1.471454ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:40.136467  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.136499  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.136543  110787 httplog.go:90] GET /healthz: (1.223116ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.143626  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.188734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.144015  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0813 02:49:40.162802  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.338872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.184718  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.030884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.185070  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0813 02:49:40.203218  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.663751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.223839  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.357041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.224106  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0813 02:49:40.225447  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.225476  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.225504  110787 httplog.go:90] GET /healthz: (1.005079ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:40.236721  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.236754  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.236824  110787 httplog.go:90] GET /healthz: (1.529949ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.242704  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.251775ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.263761  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.259745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.264356  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0813 02:49:40.283784  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (2.2647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.303762  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.270752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.304021  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0813 02:49:40.322894  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.443077ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.325522  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.325695  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.325939  110787 httplog.go:90] GET /healthz: (1.416602ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:40.336250  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.336513  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.336814  110787 httplog.go:90] GET /healthz: (1.563498ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.343543  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.101162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.344254  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0813 02:49:40.363042  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.602448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.384250  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.401807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.384956  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0813 02:49:40.402739  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.325623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.424891  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.396099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.425111  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0813 02:49:40.426015  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.426041  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.426073  110787 httplog.go:90] GET /healthz: (1.492455ms) 0 [Go-http-client/1.1 127.0.0.1:39392]
I0813 02:49:40.435991  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.436034  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.436093  110787 httplog.go:90] GET /healthz: (886.692µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.455798  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (14.37383ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.469003  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.668875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.469261  110787 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0813 02:49:40.482828  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.319886ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.485995  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.524163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.503311  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.830237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.503531  110787 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0813 02:49:40.523443  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.853278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.525418  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.328857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.526053  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.526113  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.526152  110787 httplog.go:90] GET /healthz: (1.642851ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:40.536372  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.536397  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.536433  110787 httplog.go:90] GET /healthz: (1.179256ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.543865  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.422573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.544128  110787 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0813 02:49:40.562568  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.112222ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.564247  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.140691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.583330  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.823254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.583834  110787 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0813 02:49:40.603262  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.71375ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.605213  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.424716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.623624  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.623917  110787 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0813 02:49:40.625232  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.625258  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.625300  110787 httplog.go:90] GET /healthz: (919.337µs) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:40.636102  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.636129  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.636169  110787 httplog.go:90] GET /healthz: (969.419µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.642549  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.127372ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.644292  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.210197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.663909  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.381247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.664357  110787 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0813 02:49:40.683078  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.300094ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.684934  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.713903  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.770055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.714139  110787 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0813 02:49:40.722605  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.124594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.724670  110787 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.321626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.725216  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.725240  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.725286  110787 httplog.go:90] GET /healthz: (844.164µs) 0 [Go-http-client/1.1 127.0.0.1:39392]
I0813 02:49:40.736425  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.736724  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.737298  110787 httplog.go:90] GET /healthz: (1.962266ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.743337  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.925255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.743545  110787 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0813 02:49:40.762865  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.317572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.765162  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.436205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.783891  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.418383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.784280  110787 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0813 02:49:40.802855  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.312318ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.804651  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.307617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.823573  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.112212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.823799  110787 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0813 02:49:40.825486  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.825529  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.825615  110787 httplog.go:90] GET /healthz: (1.006917ms) 0 [Go-http-client/1.1 127.0.0.1:39392]
I0813 02:49:40.836054  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.836081  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.836150  110787 httplog.go:90] GET /healthz: (901.664µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.842416  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (993.928µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.843997  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.122995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.863902  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.407987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.864143  110787 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0813 02:49:40.883394  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.851247ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.885347  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.368172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.903915  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.394779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.904246  110787 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0813 02:49:40.922813  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.294999ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.925040  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.680579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:40.925550  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.925659  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.925819  110787 httplog.go:90] GET /healthz: (1.162737ms) 0 [Go-http-client/1.1 127.0.0.1:39390]
I0813 02:49:40.936462  110787 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0813 02:49:40.936769  110787 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0813 02:49:40.937095  110787 httplog.go:90] GET /healthz: (1.81506ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.943328  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.959084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.943510  110787 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0813 02:49:40.962661  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.276418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.964760  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.243416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:40.990280  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (8.389708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:41.001205  110787 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0813 02:49:41.002690  110787 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (990.376µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:41.004564  110787 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.306735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:41.023706  110787 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.26998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:41.023935  110787 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0813 02:49:41.025494  110787 httplog.go:90] GET /healthz: (936.912µs) 200 [Go-http-client/1.1 127.0.0.1:39390]
W0813 02:49:41.026209  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026225  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026242  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026269  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026282  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026293  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026303  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026312  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026323  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026371  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0813 02:49:41.026385  110787 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0813 02:49:41.026406  110787 factory.go:299] Creating scheduler from algorithm provider 'DefaultProvider'
I0813 02:49:41.026415  110787 factory.go:387] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0813 02:49:41.026892  110787 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.026913  110787 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.027226  110787 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.027240  110787 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.027485  110787 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.027495  110787 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.027927  110787 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.027941  110787 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.028345  110787 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.028356  110787 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.029923  110787 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (649.272µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39604]
I0813 02:49:41.030422  110787 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (393.978µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39600]
I0813 02:49:41.030947  110787 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (449.041µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39602]
I0813 02:49:41.031413  110787 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (341.887µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:49:41.031878  110787 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (390.778µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:49:41.033362  110787 get.go:250] Starting watch for /api/v1/nodes, rv=29198 labels= fields= timeout=8m38s
I0813 02:49:41.033451  110787 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=29208 labels= fields= timeout=5m25s
I0813 02:49:41.033735  110787 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=29208 labels= fields= timeout=7m39s
I0813 02:49:41.033910  110787 get.go:250] Starting watch for /api/v1/services, rv=29198 labels= fields= timeout=6m33s
I0813 02:49:41.036414  110787 get.go:250] Starting watch for /api/v1/pods, rv=29198 labels= fields= timeout=6m25s
I0813 02:49:41.036536  110787 httplog.go:90] GET /healthz: (1.318385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39602]
I0813 02:49:41.036969  110787 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.036995  110787 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.037282  110787 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.037301  110787 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.037936  110787 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (423.06µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0813 02:49:41.038523  110787 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (419.994µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
I0813 02:49:41.038548  110787 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=29208 labels= fields= timeout=7m35s
I0813 02:49:41.039160  110787 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=29208 labels= fields= timeout=6m36s
I0813 02:49:41.039370  110787 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.039388  110787 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.039411  110787 httplog.go:90] GET /api/v1/namespaces/default: (1.819998ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39602]
I0813 02:49:41.040126  110787 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.040143  110787 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.040491  110787 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.040510  110787 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.040938  110787 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (410.1µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39602]
I0813 02:49:41.041777  110787 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=29198 labels= fields= timeout=7m18s
I0813 02:49:41.042063  110787 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (489.167µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39614]
I0813 02:49:41.042248  110787 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (351.971µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39602]
I0813 02:49:41.042872  110787 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=29199 labels= fields= timeout=7m14s
I0813 02:49:41.043041  110787 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=29207 labels= fields= timeout=8m49s
I0813 02:49:41.045087  110787 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.045107  110787 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0813 02:49:41.046302  110787 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (445.135µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39622]
I0813 02:49:41.046716  110787 httplog.go:90] POST /api/v1/namespaces: (5.672834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:41.047359  110787 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=29197 labels= fields= timeout=7m28s
I0813 02:49:41.049097  110787 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (869.825µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:41.053142  110787 httplog.go:90] POST /api/v1/namespaces/default/services: (3.687116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:41.054390  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (859.531µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:41.056281  110787 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.566922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:41.126844  110787 shared_informer.go:211] caches populated
I0813 02:49:41.227093  110787 shared_informer.go:211] caches populated
I0813 02:49:41.327249  110787 shared_informer.go:211] caches populated
I0813 02:49:41.427477  110787 shared_informer.go:211] caches populated
I0813 02:49:41.527710  110787 shared_informer.go:211] caches populated
I0813 02:49:41.627887  110787 shared_informer.go:211] caches populated
I0813 02:49:41.728094  110787 shared_informer.go:211] caches populated
I0813 02:49:41.828292  110787 shared_informer.go:211] caches populated
I0813 02:49:41.928444  110787 shared_informer.go:211] caches populated
I0813 02:49:42.028666  110787 shared_informer.go:211] caches populated
I0813 02:49:42.032564  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:42.033107  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:42.035959  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:42.038410  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:42.039017  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:42.041432  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:42.047195  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:42.128882  110787 shared_informer.go:211] caches populated
I0813 02:49:42.229112  110787 shared_informer.go:211] caches populated
I0813 02:49:42.231885  110787 httplog.go:90] POST /api/v1/nodes: (2.245626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:42.232087  110787 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0813 02:49:42.234607  110787 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods: (2.238532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:42.234789  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/waiting-pod
I0813 02:49:42.234814  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/waiting-pod
I0813 02:49:42.234978  110787 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/waiting-pod", node "test-node-0"
I0813 02:49:42.235003  110787 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0813 02:49:42.235091  110787 framework.go:558] waiting for 30s for pod "waiting-pod" at permit
I0813 02:49:42.237165  110787 factory.go:622] Attempting to bind signalling-pod to test-node-0
I0813 02:49:42.237193  110787 factory.go:622] Attempting to bind waiting-pod to test-node-0
I0813 02:49:42.237666  110787 scheduler.go:447] Failed to bind pod: permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod
E0813 02:49:42.237690  110787 scheduler.go:449] scheduler cache ForgetPod failed: pod 09b7ca35-a017-438b-92ef-cf6e9df76297 wasn't assumed so cannot be forgotten
E0813 02:49:42.237706  110787 scheduler.go:605] error binding pod: Post http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod/binding: dial tcp 127.0.0.1:36341: connect: connection refused
E0813 02:49:42.237730  110787 factory.go:573] Error scheduling permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod: Post http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod/binding: dial tcp 127.0.0.1:36341: connect: connection refused; retrying
I0813 02:49:42.237762  110787 factory.go:631] Updating pod condition for permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0813 02:49:42.238058  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
E0813 02:49:42.238068  110787 scheduler.go:280] Error updating the condition of the pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod: Put http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod/status: dial tcp 127.0.0.1:36341: connect: connection refused
E0813 02:49:42.238424  110787 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36341/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/events: dial tcp 127.0.0.1:36341: connect: connection refused' (may retry after sleeping)
I0813 02:49:42.239122  110787 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/waiting-pod/binding: (1.675971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:42.239332  110787 scheduler.go:614] pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0813 02:49:42.241868  110787 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/events: (2.15548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
E0813 02:49:42.438627  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
E0813 02:49:42.839188  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
I0813 02:49:43.032794  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:43.033275  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:43.036049  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:43.038554  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:43.039163  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:43.041594  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:43.047344  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 02:49:43.639726  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
I0813 02:49:44.032979  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:44.033645  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:44.036169  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:44.038711  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:44.039309  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:44.041935  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:44.047519  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:45.033191  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:45.033816  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:45.036359  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:45.038872  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:45.039463  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:45.042102  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:45.047703  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 02:49:45.240333  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
I0813 02:49:46.033374  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:46.033970  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:46.036878  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:46.039015  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:46.039623  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:46.042252  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:46.047846  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:47.033670  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:47.034065  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:47.037300  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:47.039154  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:47.039941  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:47.042407  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:47.047995  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:48.033855  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:48.034245  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:48.037433  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:48.039299  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:48.040096  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:48.042542  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:48.048132  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 02:49:48.440922  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
I0813 02:49:49.034052  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:49.034442  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:49.037861  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:49.039457  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:49.040253  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:49.042703  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:49.048275  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:50.034258  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:50.034618  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:50.038013  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:50.039606  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:50.040424  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:50.042918  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:50.048453  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:51.034826  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:51.035693  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:51.038159  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:51.039343  110787 httplog.go:90] GET /api/v1/namespaces/default: (2.062806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:51.039740  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:51.041426  110787 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.655848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:51.042625  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:51.043045  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:51.043967  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.711139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:49:51.048729  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:52.034962  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:52.035850  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:52.038318  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:52.040114  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:52.042790  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:52.043196  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:52.048907  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 02:49:52.777749  110787 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36341/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/events: dial tcp 127.0.0.1:36341: connect: connection refused' (may retry after sleeping)
I0813 02:49:53.035126  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:53.036018  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:53.038478  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:53.040218  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:53.042946  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:53.044376  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:53.049074  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:54.035295  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:54.036173  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:54.038643  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:54.040390  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:54.043122  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:54.044522  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:54.049276  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 02:49:54.841505  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
I0813 02:49:55.035476  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:55.037044  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:55.038831  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:55.040576  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:55.043300  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:55.044657  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:55.049417  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:56.035696  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:56.039021  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:56.040643  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:56.040789  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:56.043640  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:56.044811  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:56.049578  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:57.035836  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:57.039199  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:57.040743  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:57.041460  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:57.043770  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:57.044974  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:57.050431  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:58.036037  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:58.039367  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:58.040925  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:58.041562  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:58.043896  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:58.045052  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:58.050652  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:59.036233  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:59.039522  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:59.041085  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:59.041712  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:59.044325  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:59.045176  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:49:59.050811  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:00.036420  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:00.039719  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:00.041185  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:00.041854  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:00.044952  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:00.045299  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:00.051291  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:01.036697  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:01.038966  110787 httplog.go:90] GET /api/v1/namespaces/default: (1.487917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:50:01.039900  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:01.040880  110787 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.515489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:50:01.041747  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:01.042005  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:01.042291  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (996.302µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:50:01.045144  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:01.045445  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:01.051481  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:02.037092  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:02.040019  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:02.042089  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:02.042193  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:02.045737  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:02.045792  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:02.051657  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:03.037272  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:03.040212  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:03.042237  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:03.042379  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:03.045935  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:03.045963  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:03.051813  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:04.037511  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:04.040393  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:04.042369  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:04.042552  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:04.046063  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:04.046088  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:04.051980  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 02:50:04.859108  110787 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36341/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/events: dial tcp 127.0.0.1:36341: connect: connection refused' (may retry after sleeping)
I0813 02:50:05.037756  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:05.040547  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:05.042497  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:05.042639  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:05.046206  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:05.046223  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:05.052172  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:06.037888  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:06.040639  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:06.042645  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:06.042756  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:06.046387  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:06.046417  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:06.052350  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:07.038068  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:07.040790  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:07.042970  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:07.043034  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:07.046493  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:07.046535  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:07.052504  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0813 02:50:07.642180  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
I0813 02:50:08.038291  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:08.040971  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:08.043109  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:08.043125  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:08.046646  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:08.046651  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:08.052818  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:09.038487  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:09.041158  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:09.043268  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:09.043478  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:09.046787  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:09.046833  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:09.052996  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:10.038715  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:10.041303  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:10.043406  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:10.043702  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:10.046869  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:10.047017  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:10.054688  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:11.039791  110787 httplog.go:90] GET /api/v1/namespaces/default: (2.06705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:50:11.039984  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:11.041485  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:11.041876  110787 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.587787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:50:11.043534  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:11.043791  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:11.044175  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.835528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:50:11.047091  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:11.047193  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:11.054907  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:12.040354  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:12.041704  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:12.043696  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:12.043919  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:12.047211  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:12.047343  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:12.054997  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:12.238232  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:12.238282  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:12.238416  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:12.238452  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:12.238536  110787 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods: (3.027513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:50:12.241449  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.120771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43322]
I0813 02:50:12.243130  110787 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod/status: (4.147497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39612]
I0813 02:50:12.243236  110787 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/events: (4.261542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.245909  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.038395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.246189  110787 generic_scheduler.go:1193] Node test-node-0 is a potential node for preemption.
I0813 02:50:12.248751  110787 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod/status: (2.138744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.251679  110787 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/waiting-pod: (2.60496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.253608  110787 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/events: (1.433697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.341045  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.631388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.440980  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.528938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.540943  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.604224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.640862  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.520522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.741149  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.705795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.841561  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.986154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:12.941323  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.858149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.040520  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:13.041404  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.975538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.041895  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:13.043893  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:13.044117  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:13.047385  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:13.047499  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:13.055183  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:13.141971  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.515389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.241545  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.002723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.341277  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.747418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.441141  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.677477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.541391  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.953711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.641174  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.624033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.741005  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.592374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.841125  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.757581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:13.941156  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.710437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.031201  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:14.031241  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:14.031380  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:14.031425  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:14.034498  110787 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/events: (1.992923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:14.034539  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.033611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43322]
I0813 02:50:14.034561  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.815208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.040476  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.155391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.040901  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:14.042016  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:14.044040  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:14.044321  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:14.047510  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:14.047675  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:14.055299  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:14.144292  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.878152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.241035  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.661288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.340836  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.486853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.441538  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.165328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.540881  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.587859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.643897  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.781521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.741027  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.703203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.841871  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.203528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:14.944096  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (4.638241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.041067  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:15.042234  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.769219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.043295  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:15.043475  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:15.043509  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:15.043728  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:15.043794  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:15.044269  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:15.044548  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:15.047760  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.221091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:15.047838  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:15.047921  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:15.048266  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.856201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.048457  110787 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/events/preemptor-pod.15ba5babd4bdda60: (2.633826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43936]
I0813 02:50:15.055475  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:15.142015  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.492695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.241799  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.308401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.342048  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.479976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.441936  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.416884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.542963  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.272454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.642706  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.974985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
E0813 02:50:15.706777  110787 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36341/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/events: dial tcp 127.0.0.1:36341: connect: connection refused' (may retry after sleeping)
I0813 02:50:15.742079  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.544828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.842419  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.519424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:15.944325  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (4.564926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.041330  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:16.042572  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.089138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.043510  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:16.043777  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:16.043802  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:16.044071  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:16.044159  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:16.044545  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:16.044721  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:16.048092  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:16.048177  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:16.048363  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.833362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:16.049513  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.666129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.055934  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:16.145361  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (5.773456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.242010  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.510556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.344081  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (4.628325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.446053  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (6.196881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.547175  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (7.689537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.645408  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (5.692386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.741664  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.075338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.841823  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.36265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:16.941956  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.280788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.042029  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.492878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.042212  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:17.043729  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:17.043893  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:17.043920  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:17.044099  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:17.044179  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:17.044936  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:17.045046  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:17.046731  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.97604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:17.046917  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.495234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.048244  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:17.048416  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:17.056122  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:17.142159  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.596785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.242171  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.579639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.342530  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.819501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.441909  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.432283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.541917  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.374853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.642631  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.953125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.741872  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.019233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.842751  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.92535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:17.942357  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.381802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:18.041809  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.22365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:18.042419  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:18.043921  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:18.044123  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:18.044146  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:18.044316  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:18.044795  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:18.045806  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:18.051813  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:18.051942  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:18.051984  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:18.056722  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:18.062558  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (17.192991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.063125  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (17.996092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:18.141765  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.34448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.242853  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.489812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.341809  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.306988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.441674  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.195685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.541302  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.00945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.641091  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.789442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.744901  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (5.632951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.841967  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.569757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:18.941159  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.845358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.041335  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.945182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.042609  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:19.044136  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:19.044263  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:19.044283  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:19.044433  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:19.044491  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:19.045958  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:19.047117  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.254933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:19.047723  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.988449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.051987  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:19.052131  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:19.052147  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:19.056904  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:19.141139  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.760536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.241068  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.692317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.341204  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.758395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.440856  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.524698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.541026  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.401363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.640869  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.567746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.740910  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.563362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.841736  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.647028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:19.940917  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.550156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:20.041037  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.721099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:20.042763  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:20.044281  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:20.044368  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:20.044478  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:20.044729  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:20.044868  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:20.046119  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:20.046680  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.556398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:20.047055  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.307602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.052170  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:20.052474  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:20.052769  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:20.053018  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:20.053128  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:20.052303  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:20.052338  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:20.054721  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.30255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.055533  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.434586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:20.057056  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:20.141330  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.898683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.241137  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.761263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.340820  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.497933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.441405  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.007579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.541274  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.902831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.641152  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.804325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.740915  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.571241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.841004  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.670657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:20.941055  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.607123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.039430  110787 httplog.go:90] GET /api/v1/namespaces/default: (1.741106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.040635  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.36741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:21.041184  110787 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.078666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.042519  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (936.111µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.042927  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:21.044433  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:21.046259  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:21.052527  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:21.054173  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:21.054184  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:21.057347  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:21.140883  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.480224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.241778  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.383161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.342439  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.906305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.442006  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.650459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.541206  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.857449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.641888  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.412271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.740916  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.608317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.841094  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.714602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:21.940892  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.545464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:22.041437  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.051345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:22.043188  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:22.044692  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:22.045085  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:22.045106  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:22.045189  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:22.045232  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:22.046430  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:22.046753  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.16264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:22.047720  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.649267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.052898  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:22.054318  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:22.054321  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:22.057492  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:22.141116  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.772514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.241023  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.627702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.340691  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.309344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.440799  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.453558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.540881  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.564446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.640866  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.534056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.740828  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.52263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.840940  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.592236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:22.940909  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.572357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:23.041817  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.448085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:23.043463  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:23.044864  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:23.045017  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:23.045041  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:23.045181  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:23.045220  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:23.046556  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:23.047436  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.002429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:23.047748  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.199294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.053093  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:23.054462  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:23.054834  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:23.057691  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:23.141106  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.693642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.240848  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.545259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.341462  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.091603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.440869  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.602829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.540856  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.537532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.649804  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (10.46111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.740870  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.560307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.840875  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.531525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:23.941867  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.561485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:24.040927  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.507089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:24.043711  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:24.045039  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:24.045149  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:24.045164  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:24.045345  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:24.045409  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:24.046709  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:24.046909  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.06774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:24.047166  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.16897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.053286  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:24.054682  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:24.054978  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:24.057850  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:24.141940  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.919058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.241037  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.665286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.340898  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.560946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.440938  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.573587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.540731  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.401727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.640990  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.641676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.741478  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.775487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.841480  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.656703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:24.940916  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.525736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.041026  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.588092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.043890  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:25.045219  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:25.045322  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:25.045341  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:25.045548  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:25.045653  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:25.046864  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:25.047493  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.415068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:25.047513  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.397639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.053448  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:25.054811  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:25.055112  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:25.058029  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:25.141011  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.65855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.240963  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.52161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.341717  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.358718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.441010  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.66944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.540869  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.553552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.640856  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.569778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.740867  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.49776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.841286  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.860124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:25.941325  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.972248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:26.041005  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.596852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:26.044104  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:26.045415  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:26.045513  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:26.045531  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:26.045664  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:26.045708  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:26.047359  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.461881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:26.047491  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.456011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.047746  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:26.053689  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:26.054950  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:26.055484  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:26.058187  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:26.140937  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.648898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.240894  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.558705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.340962  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.581247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.441043  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.685468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.540782  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.435331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.641642  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.289487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.741719  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.158591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.841173  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.489431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:26.941332  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.565713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.040794  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.467231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.044250  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:27.045571  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:27.045749  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:27.045770  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:27.045912  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:27.045964  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:27.047770  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.459755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:27.048005  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.254393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.048189  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:27.053858  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:27.055073  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:27.055663  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:27.058327  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:27.141128  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.777803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.241032  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.649513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
E0813 02:50:27.316381  110787 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36341/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/events: dial tcp 127.0.0.1:36341: connect: connection refused' (may retry after sleeping)
I0813 02:50:27.340872  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.532069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.440770  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.45233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.541060  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.754576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.641052  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.662129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.744874  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (5.440009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.842128  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.337753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:27.941032  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.614915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:28.040890  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.518708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:28.044405  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:28.045695  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:28.045806  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:28.045821  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:28.045907  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:28.045934  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:28.047278  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.128738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:28.047384  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.106972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.048356  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:28.054013  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:28.055222  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:28.055812  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:28.058481  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:28.140915  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.629725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.243857  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.768753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.341718  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.324541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.440791  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.417692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.542020  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.025049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.641060  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.652348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.741260  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.913004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.841933  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.024613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:28.941572  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.070906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:29.033878  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:29.033919  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:29.034132  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:29.034180  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:29.035880  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.449619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:29.036295  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.120569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.040476  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.230457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.044540  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:29.045865  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:29.048496  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:29.054173  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:29.055361  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:29.055957  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:29.058794  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:29.140979  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.542565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.240792  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.437064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.342193  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.753879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.440693  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.347268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.541382  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.481255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.640710  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.415227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.740656  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.319684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.841063  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.747673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:29.941078  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.74324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:30.041000  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.652169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:30.044712  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:30.046012  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:30.046149  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:30.046213  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:30.046341  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:30.046391  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:30.048198  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.513894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:30.048828  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.960653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.048946  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:30.054342  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:30.055511  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:30.056099  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:30.058956  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:30.140777  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.437409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.241722  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.325541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.344863  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (5.497453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.440767  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.499724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.540675  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.429037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.641314  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.981269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.740842  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.534786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.840910  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.600675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:30.941118  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.746887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.039147  110787 httplog.go:90] GET /api/v1/namespaces/default: (1.350395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.040665  110787 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.17216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.041339  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.059478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:31.042429  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (905.801µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.044891  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:31.046180  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:31.046319  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:31.046458  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:31.046672  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:31.046764  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:31.048114  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.128343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:31.048608  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.693108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.049059  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:31.054532  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:31.055706  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:31.056200  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:31.059114  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:31.140950  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.600252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.240664  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.359447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.341191  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.858036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.442768  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.411166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.540860  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.618616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.640792  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.469419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.741305  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.512181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.840812  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.443353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:31.940680  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.372908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.040671  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.395757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.045182  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:32.046341  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:32.046445  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:32.046462  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:32.046604  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:32.046655  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:32.048430  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.404741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:32.048899  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.539911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.049178  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:32.054632  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:32.055835  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:32.056321  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:32.059222  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:32.140910  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.534809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.240524  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.230673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.340766  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.392174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.440611  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.222072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.540888  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.383871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.640891  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.578529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.740832  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.484548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.840389  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.110944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:32.940839  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.484781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:33.040539  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.253332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:33.045365  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:33.046540  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:33.046650  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:33.046674  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:33.046787  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:33.046824  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:33.048711  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.438549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:33.048976  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.272332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:33.049278  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:33.054779  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:33.055966  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:33.056452  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:33.059353  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:33.140888  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.429287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:33.240594  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.349604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
E0813 02:50:33.242703  110787 factory.go:606] Error getting pod permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/signalling-pod for retry: Get http://127.0.0.1:36341/api/v1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/pods/signalling-pod: dial tcp 127.0.0.1:36341: connect: connection refused; retrying...
I0813 02:50:33.341048  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.660298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:33.441103  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.787821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:33.540761  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.437001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:33.641297  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.926067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:33.740883  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.560574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:33.840701  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.387798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:33.941006  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.513039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.040912  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.628376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.045544  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:34.046707  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:34.046817  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:34.046836  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:34.046971  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:34.047013  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:34.048344  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.068312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:34.048473  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (946.008µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.049433  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:34.054920  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:34.056119  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:34.056656  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:34.059518  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:34.141087  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.778845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.240788  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.456597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.341051  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.708777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.440799  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.478879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.540742  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.450241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.640778  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.456102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.740502  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.266775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.842414  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.887901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:34.941001  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.667414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.041045  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.695093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.045840  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:35.046884  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:35.046997  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:35.047010  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:35.047142  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:35.047184  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:35.048893  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.317374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:35.049041  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.615248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.049680  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:35.055076  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:35.056284  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:35.056810  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:35.059658  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:35.140877  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.529793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.240923  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.619827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.341028  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.624084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.440825  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.479128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.540747  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.426489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.640884  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.535349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.740977  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.585759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.840766  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.498891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:35.940715  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.44109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.040809  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.463479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.046015  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:36.047039  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:36.047148  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:36.047166  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:36.047294  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:36.047340  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:36.048856  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.125849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:36.049469  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.681174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.049824  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:36.055252  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:36.056428  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:36.056983  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:36.059838  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:36.140785  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.48238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.240870  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.544761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.340565  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.21774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.440816  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.509993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.540710  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.391709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.640911  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.547278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.740916  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.62789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.840804  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.490359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:36.941218  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.835778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:37.040822  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.522484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:37.046207  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:37.047224  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:37.047333  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:37.047351  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:37.047464  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:37.047544  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:37.048988  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.230861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:37.049822  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.95537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.049913  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:37.055405  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:37.056627  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:37.057099  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:37.059976  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:37.140793  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.430789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.240696  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.404164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.340766  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.326539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.440760  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.395267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.545842  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (6.016863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.640890  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.554888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.740773  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.491893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.746033  110787 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.119255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
E0813 02:50:37.747132  110787 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36341/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8ef2bcd5-92aa-49f9-af1d-55d4360477d4/events: dial tcp 127.0.0.1:36341: connect: connection refused' (may retry after sleeping)
I0813 02:50:37.747397  110787 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.025773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.748692  110787 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (781.954µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.841266  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.897474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:37.940795  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.498832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.041037  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.613278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.046425  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:38.047406  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:38.047511  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:38.047529  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:38.047684  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:38.047729  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:38.049162  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.146532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:38.049163  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.227384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.050052  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:38.055601  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:38.056792  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:38.057263  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:38.060140  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:38.141715  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.914102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.241140  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.645856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.341556  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.998694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.441241  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.812895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.541481  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.049306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.641223  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.77815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.741564  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.111063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.841514  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.943171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:38.941113  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.62093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:39.041754  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.191331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:39.046853  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:39.047662  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:39.047784  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:39.047794  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:39.047916  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:39.047957  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:39.049956  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.660541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.049956  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.746412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:39.050269  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:39.056226  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:39.057136  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:39.057452  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:39.060337  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:39.141775  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.33302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.241272  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.896629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.341950  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.40827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.442483  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.070591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.541156  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.614806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.641057  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.611805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.741141  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.715144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.841790  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.367818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:39.941579  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.118823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:40.041518  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.988272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:40.047008  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:40.047847  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:40.047956  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:40.047966  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:40.048090  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:40.048140  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:40.049969  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.550524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:40.050127  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.618904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.050496  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:40.056501  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:40.057379  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:40.057655  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:40.060512  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:40.141929  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.35892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.241462  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.022841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.341107  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.711618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.440941  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.6217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.540928  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.596467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.641233  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.968593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.740542  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.228524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.841268  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.829885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:40.941273  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.684549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:41.039712  110787 httplog.go:90] GET /api/v1/namespaces/default: (1.746393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:41.040911  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.678863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.041360  110787 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.258356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:41.042971  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.242227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:41.047194  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:41.048054  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:41.048199  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:41.048218  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:41.048368  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:41.048418  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:41.050500  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.758831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:41.050681  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:41.051331  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.016233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.056829  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:41.057563  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:41.057836  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:41.060708  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:41.141538  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (2.162108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.242791  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.157464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.341250  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.721826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.442383  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (3.003638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.541293  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.831052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.640791  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.366398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.740910  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.581743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.841007  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.599417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:41.940969  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.545427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.041074  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.518832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.047395  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:42.048230  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:42.048325  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:42.048340  110787 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:42.048450  110787 factory.go:557] Unable to schedule preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0813 02:50:42.048488  110787 factory.go:631] Updating pod condition for preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0813 02:50:42.050198  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (895.706µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.050234  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.289403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:42.050843  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:42.057054  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:42.057769  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:42.057962  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:42.060896  110787 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0813 02:50:42.141249  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.899596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.240558  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (1.332845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.242001  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (960.994µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.243080  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/waiting-pod: (811.504µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.247320  110787 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/waiting-pod: (3.94242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.249909  110787 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:42.249944  110787 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/preemptor-pod
I0813 02:50:42.251758  110787 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/events: (1.631094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0813 02:50:42.252118  110787 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (4.594526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.254818  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/waiting-pod: (1.014005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.257407  110787 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincaed2a40-cefe-40f2-8dd4-68e9fea4fe91/pods/preemptor-pod: (862.836µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.258745  110787 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=29199&timeout=7m14s&timeoutSeconds=434&watch=true: (1m1.216167935s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39614]
I0813 02:50:42.258807  110787 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=29197&timeout=7m28s&timeoutSeconds=448&watch=true: (1m1.211721944s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39622]
I0813 02:50:42.258824  110787 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=29198&timeout=6m33s&timeoutSeconds=393&watch=true: (1m1.225196079s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39390]
I0813 02:50:42.258919  110787 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29198&timeout=8m38s&timeoutSeconds=518&watch=true: (1m1.225843741s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39604]
I0813 02:50:42.258936  110787 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=29198&timeout=6m25s&timeoutSeconds=385&watch=true: (1m1.22282695s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39606]
I0813 02:50:42.258829  110787 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=29207&timeout=8m49s&timeoutSeconds=529&watch=true: (1m1.216095521s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39618]
I0813 02:50:42.259074  110787 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=29198&timeout=7m18s&timeoutSeconds=438&watch=true: (1m1.217656089s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39616]
I0813 02:50:42.259138  110787 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=29208&timeout=6m36s&timeoutSeconds=396&watch=true: (1m1.220227353s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39610]
E0813 02:50:42.259137  110787 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0813 02:50:42.259200  110787 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=29208&timeout=5m25s&timeoutSeconds=325&watch=true: (1m1.226082645s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39600]
I0813 02:50:42.259255  110787 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=29208&timeout=7m39s&timeoutSeconds=459&watch=true: (1m1.225765131s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39392]
I0813 02:50:42.259329  110787 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=29208&timeout=7m35s&timeoutSeconds=455&watch=true: (1m1.221028488s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39608]
I0813 02:50:42.261749  110787 httplog.go:90] DELETE /api/v1/nodes: (3.559692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.261954  110787 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0813 02:50:42.263328  110787 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.141538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
I0813 02:50:42.265767  110787 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.349159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39620]
--- FAIL: TestPreemptWithPermitPlugin (64.70s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190813-024238.xml
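
Note on the failure message: the steady stream of GET requests for preemptor-pod above is the test's polling loop, and "timed out waiting for the condition" is the error that client-go's wait helpers return when a polled condition never becomes true before the timeout. The snippet below is only a minimal sketch of that kind of assertion, not the actual framework_test.go code; the helper names, the 100ms interval, and the one-minute timeout are assumptions for illustration.

// Minimal sketch (assumed names and timings, not the real framework_test.go
// helpers) of a scheduling-wait assertion that fails with
// "timed out waiting for the condition" when the pod is never bound.
package scheduler

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// podScheduled reports whether the pod's PodScheduled condition is True;
// transient GET errors simply cause another poll, which matches the steady
// stream of 200 responses for preemptor-pod in the log above.
func podScheduled(cs clientset.Interface, namespace, name string) wait.ConditionFunc {
	return func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient API errors
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodScheduled && cond.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}
}

// waitForPodToSchedule polls every 100ms for up to a minute (assumed values);
// if the condition never holds, wait.Poll returns wait.ErrWaitTimeout, which
// prints as "timed out waiting for the condition".
func waitForPodToSchedule(cs clientset.Interface, namespace, name string) error {
	return wait.Poll(100*time.Millisecond, time.Minute, podScheduled(cs, namespace, name))
}
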




Error lines from build-log.txt

... skipping 756 lines ...
W0813 02:37:28.978] I0813 02:37:28.978400   53229 node_lifecycle_controller.go:418] Controller will reconcile labels.
W0813 02:37:28.978] I0813 02:37:28.978564   53229 node_lifecycle_controller.go:431] Controller will taint node by condition.
W0813 02:37:28.979] I0813 02:37:28.978744   53229 controllermanager.go:535] Started "nodelifecycle"
W0813 02:37:28.979] I0813 02:37:28.979246   53229 node_lifecycle_controller.go:455] Starting node controller
W0813 02:37:28.979] I0813 02:37:28.979284   53229 controller_utils.go:1029] Waiting for caches to sync for taint controller
W0813 02:37:28.981] I0813 02:37:28.980858   53229 node_lifecycle_controller.go:77] Sending events to api server
W0813 02:37:28.983] E0813 02:37:28.983322   53229 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0813 02:37:28.983] W0813 02:37:28.983357   53229 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0813 02:37:28.984] I0813 02:37:28.984654   53229 controllermanager.go:535] Started "daemonset"
W0813 02:37:28.985] I0813 02:37:28.984726   53229 daemon_controller.go:267] Starting daemon sets controller
W0813 02:37:28.985] I0813 02:37:28.984748   53229 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
W0813 02:37:28.986] I0813 02:37:28.986029   53229 controllermanager.go:535] Started "cronjob"
W0813 02:37:28.986] I0813 02:37:28.986115   53229 cronjob_controller.go:96] Starting CronJob Manager
... skipping 26 lines ...
W0813 02:37:28.990] W0813 02:37:28.989684   53229 controllermanager.go:514] "tokencleaner" is disabled
W0813 02:37:28.990] I0813 02:37:28.989697   53229 controller_utils.go:1029] Waiting for caches to sync for deployment controller
W0813 02:37:28.990] I0813 02:37:28.989901   53229 controllermanager.go:535] Started "podgc"
W0813 02:37:28.990] W0813 02:37:28.989969   53229 controllermanager.go:514] "bootstrapsigner" is disabled
W0813 02:37:28.991] I0813 02:37:28.989987   53229 gc_controller.go:76] Starting GC controller
W0813 02:37:28.991] I0813 02:37:28.990026   53229 controller_utils.go:1029] Waiting for caches to sync for GC controller
W0813 02:37:28.991] E0813 02:37:28.990313   53229 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0813 02:37:28.991] W0813 02:37:28.990357   53229 controllermanager.go:527] Skipping "service"
W0813 02:37:28.991] I0813 02:37:28.990674   53229 controllermanager.go:535] Started "pvc-protection"
W0813 02:37:28.991] W0813 02:37:28.990746   53229 controllermanager.go:527] Skipping "root-ca-cert-publisher"
W0813 02:37:28.993] I0813 02:37:28.990712   53229 pvc_protection_controller.go:100] Starting PVC protection controller
W0813 02:37:28.994] I0813 02:37:28.993613   53229 controller_utils.go:1029] Waiting for caches to sync for PVC protection controller
W0813 02:37:29.087] I0813 02:37:29.087281   53229 controller_utils.go:1036] Caches are synced for certificate controller
... skipping 23 lines ...
I0813 02:37:29.215]   "buildDate": "2019-08-13T02:35:48Z",
I0813 02:37:29.215]   "goVersion": "go1.12.1",
I0813 02:37:29.215]   "compiler": "gc",
I0813 02:37:29.215]   "platform": "linux/amd64"
I0813 02:37:29.300] }+++ [0813 02:37:29] Testing kubectl version: check client only output matches expected output
W0813 02:37:29.401] I0813 02:37:29.212375   53229 controller_utils.go:1036] Caches are synced for stateful set controller
W0813 02:37:29.402] W0813 02:37:29.293133   53229 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0813 02:37:29.402] I0813 02:37:29.311119   53229 controller_utils.go:1036] Caches are synced for TTL controller
W0813 02:37:29.402] I0813 02:37:29.313828   53229 controller_utils.go:1036] Caches are synced for persistent volume controller
W0813 02:37:29.402] I0813 02:37:29.379533   53229 controller_utils.go:1036] Caches are synced for taint controller
W0813 02:37:29.402] I0813 02:37:29.379682   53229 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
W0813 02:37:29.402] I0813 02:37:29.379762   53229 node_lifecycle_controller.go:1039] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0813 02:37:29.402] I0813 02:37:29.380078   53229 taint_manager.go:186] Starting NoExecuteTaintManager
W0813 02:37:29.403] I0813 02:37:29.380084   53229 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"3d2c0a97-c495-47ef-a04c-57e13c8b6b5c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0813 02:37:29.403] I0813 02:37:29.385028   53229 controller_utils.go:1036] Caches are synced for daemon sets controller
W0813 02:37:29.403] I0813 02:37:29.389341   53229 controller_utils.go:1036] Caches are synced for attach detach controller
W0813 02:37:29.415] I0813 02:37:29.414450   53229 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0813 02:37:29.425] E0813 02:37:29.425070   53229 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0813 02:37:29.426] E0813 02:37:29.425518   53229 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0813 02:37:29.437] E0813 02:37:29.436997   53229 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0813 02:37:29.476] I0813 02:37:29.476424   53229 controller_utils.go:1036] Caches are synced for namespace controller
W0813 02:37:29.488] I0813 02:37:29.488296   53229 controller_utils.go:1036] Caches are synced for service account controller
W0813 02:37:29.491] I0813 02:37:29.491085   49759 controller.go:606] quota admission added evaluator for: serviceaccounts
W0813 02:37:29.511] I0813 02:37:29.511105   53229 controller_utils.go:1036] Caches are synced for disruption controller
W0813 02:37:29.512] I0813 02:37:29.511165   53229 disruption.go:341] Sending events to api server.
W0813 02:37:29.512] I0813 02:37:29.512101   53229 controller_utils.go:1036] Caches are synced for ReplicaSet controller
... skipping 65 lines ...
I0813 02:37:32.764] +++ working dir: /go/src/k8s.io/kubernetes
I0813 02:37:32.767] +++ command: run_RESTMapper_evaluation_tests
I0813 02:37:32.781] +++ [0813 02:37:32] Creating namespace namespace-1565663852-4587
I0813 02:37:32.860] namespace/namespace-1565663852-4587 created
I0813 02:37:32.935] Context "test" modified.
I0813 02:37:32.944] +++ [0813 02:37:32] Testing RESTMapper
I0813 02:37:33.063] +++ [0813 02:37:33] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0813 02:37:33.077] +++ exit code: 0
I0813 02:37:33.197] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0813 02:37:33.197] bindings                                                                      true         Binding
I0813 02:37:33.197] componentstatuses                 cs                                          false        ComponentStatus
I0813 02:37:33.197] configmaps                        cm                                          true         ConfigMap
I0813 02:37:33.198] endpoints                         ep                                          true         Endpoints
... skipping 643 lines ...
I0813 02:37:51.207] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:37:51.373] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:37:51.467] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:37:51.665] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:37:51.764] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:37:51.857] pod "valid-pod" force deleted
W0813 02:37:51.958] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0813 02:37:51.958] error: setting 'all' parameter but found a non empty selector. 
W0813 02:37:51.959] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0813 02:37:52.060] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:37:52.071] core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0813 02:37:52.151] namespace/test-kubectl-describe-pod created
I0813 02:37:52.267] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0813 02:37:52.367] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0813 02:37:53.467] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0813 02:37:53.546] poddisruptionbudget.policy/test-pdb-4 created
I0813 02:37:53.645] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0813 02:37:53.811] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:37:54.008] pod/env-test-pod created
W0813 02:37:54.109] I0813 02:37:53.002344   49759 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0813 02:37:54.110] error: min-available and max-unavailable cannot be both specified
I0813 02:37:54.211] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0813 02:37:54.211] Name:         env-test-pod
I0813 02:37:54.212] Namespace:    test-kubectl-describe-pod
I0813 02:37:54.212] Priority:     0
I0813 02:37:54.213] Node:         <none>
I0813 02:37:54.213] Labels:       <none>
... skipping 173 lines ...
I0813 02:38:09.153] pod/valid-pod patched
I0813 02:38:09.318] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0813 02:38:09.446] pod/valid-pod patched
I0813 02:38:09.600] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0813 02:38:09.868] pod/valid-pod patched
I0813 02:38:10.036] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0813 02:38:10.341] +++ [0813 02:38:10] "kubectl patch with resourceVersion 497" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0813 02:38:10.709] pod "valid-pod" deleted
I0813 02:38:10.721] pod/valid-pod replaced
I0813 02:38:10.826] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0813 02:38:11.000] Successful
I0813 02:38:11.000] message:error: --grace-period must have --force specified
I0813 02:38:11.001] has:\-\-grace-period must have \-\-force specified
I0813 02:38:11.165] Successful
I0813 02:38:11.166] message:error: --timeout must have --force specified
I0813 02:38:11.166] has:\-\-timeout must have \-\-force specified
W0813 02:38:11.323] W0813 02:38:11.322284   53229 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0813 02:38:11.424] node/node-v1-test created
I0813 02:38:11.508] node/node-v1-test replaced
I0813 02:38:11.616] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0813 02:38:11.705] node "node-v1-test" deleted
I0813 02:38:11.815] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0813 02:38:12.098] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 36 lines ...
I0813 02:38:14.661] pod/redis-master created
I0813 02:38:14.665] pod/valid-pod created
W0813 02:38:14.766] Edit cancelled, no changes made.
W0813 02:38:14.767] Edit cancelled, no changes made.
W0813 02:38:14.767] Edit cancelled, no changes made.
W0813 02:38:14.767] Edit cancelled, no changes made.
W0813 02:38:14.767] error: 'name' already has a value (valid-pod), and --overwrite is false
W0813 02:38:14.767] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0813 02:38:14.868] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0813 02:38:14.870] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0813 02:38:14.952] (Bpod "redis-master" deleted
I0813 02:38:14.957] pod "valid-pod" deleted
I0813 02:38:15.060] core.sh:622: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 72 lines ...
I0813 02:38:21.497] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0813 02:38:21.499] +++ working dir: /go/src/k8s.io/kubernetes
I0813 02:38:21.502] +++ command: run_kubectl_create_error_tests
I0813 02:38:21.514] +++ [0813 02:38:21] Creating namespace namespace-1565663901-30738
I0813 02:38:21.594] namespace/namespace-1565663901-30738 created
I0813 02:38:21.667] Context "test" modified.
I0813 02:38:21.676] +++ [0813 02:38:21] Testing kubectl create with error
W0813 02:38:21.776] Error: must specify one of -f and -k
W0813 02:38:21.777] 
W0813 02:38:21.777] Create a resource from a file or from stdin.
W0813 02:38:21.777] 
W0813 02:38:21.777]  JSON and YAML formats are accepted.
W0813 02:38:21.777] 
W0813 02:38:21.777] Examples:
... skipping 41 lines ...
W0813 02:38:21.783] 
W0813 02:38:21.783] Usage:
W0813 02:38:21.783]   kubectl create -f FILENAME [options]
W0813 02:38:21.783] 
W0813 02:38:21.783] Use "kubectl <command> --help" for more information about a given command.
W0813 02:38:21.783] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0813 02:38:21.935] +++ [0813 02:38:21] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0813 02:38:22.036] kubectl convert is DEPRECATED and will be removed in a future version.
W0813 02:38:22.036] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0813 02:38:22.137] +++ exit code: 0
I0813 02:38:22.156] Recording: run_kubectl_apply_tests
I0813 02:38:22.157] Running command: run_kubectl_apply_tests
I0813 02:38:22.179] 
... skipping 19 lines ...
W0813 02:38:24.585] I0813 02:38:24.584810   49759 client.go:354] parsed scheme: ""
W0813 02:38:24.586] I0813 02:38:24.584853   49759 client.go:354] scheme "" not registered, fallback to default scheme
W0813 02:38:24.586] I0813 02:38:24.584907   49759 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0813 02:38:24.586] I0813 02:38:24.585001   49759 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0813 02:38:24.586] I0813 02:38:24.585675   49759 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0813 02:38:24.588] I0813 02:38:24.588402   49759 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0813 02:38:24.687] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0813 02:38:24.789] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0813 02:38:24.789] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0813 02:38:24.811] +++ exit code: 0
I0813 02:38:24.849] Recording: run_kubectl_run_tests
I0813 02:38:24.850] Running command: run_kubectl_run_tests
I0813 02:38:24.872] 
... skipping 97 lines ...
I0813 02:38:27.653] Context "test" modified.
I0813 02:38:27.662] +++ [0813 02:38:27] Testing kubectl create filter
I0813 02:38:27.755] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:38:27.932] pod/selector-test-pod created
I0813 02:38:28.042] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0813 02:38:28.137] Successful
I0813 02:38:28.137] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0813 02:38:28.137] has:pods "selector-test-pod-dont-apply" not found
I0813 02:38:28.219] pod "selector-test-pod" deleted
I0813 02:38:28.241] +++ exit code: 0
I0813 02:38:28.276] Recording: run_kubectl_apply_deployments_tests
I0813 02:38:28.277] Running command: run_kubectl_apply_deployments_tests
I0813 02:38:28.298] 
... skipping 29 lines ...
W0813 02:38:30.820] I0813 02:38:30.723235   53229 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565663908-19227", Name:"nginx", UID:"570ea22b-b071-4893-b2f9-6e733267cbdf", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0813 02:38:30.820] I0813 02:38:30.728798   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663908-19227", Name:"nginx-7dbc4d9f", UID:"d5eb3652-2c6f-4240-9326-8fa18ccce116", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-hwj67
W0813 02:38:30.821] I0813 02:38:30.731524   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663908-19227", Name:"nginx-7dbc4d9f", UID:"d5eb3652-2c6f-4240-9326-8fa18ccce116", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-26d7p
W0813 02:38:30.821] I0813 02:38:30.733689   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663908-19227", Name:"nginx-7dbc4d9f", UID:"d5eb3652-2c6f-4240-9326-8fa18ccce116", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-bfz54
I0813 02:38:30.922] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0813 02:38:35.101] Successful
I0813 02:38:35.102] message:Error from server (Conflict): error when applying patch:
I0813 02:38:35.102] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565663908-19227\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0813 02:38:35.102] to:
I0813 02:38:35.102] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0813 02:38:35.103] Name: "nginx", Namespace: "namespace-1565663908-19227"
I0813 02:38:35.105] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565663908-19227\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-13T02:38:30Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-13T02:38:30Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-13T02:38:30Z"]] "name":"nginx" "namespace":"namespace-1565663908-19227" "resourceVersion":"594" "selfLink":"/apis/apps/v1/namespaces/namespace-1565663908-19227/deployments/nginx" "uid":"570ea22b-b071-4893-b2f9-6e733267cbdf"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-13T02:38:30Z" "lastUpdateTime":"2019-08-13T02:38:30Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-13T02:38:30Z" "lastUpdateTime":"2019-08-13T02:38:30Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0813 02:38:35.106] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0813 02:38:35.106] has:Error from server (Conflict)
W0813 02:38:35.890] I0813 02:38:35.890171   53229 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565663898-14917
W0813 02:38:39.418] E0813 02:38:39.417463   53229 replica_set.go:450] Sync "namespace-1565663908-19227/nginx-7dbc4d9f" failed with replicasets.apps "nginx-7dbc4d9f" not found
I0813 02:38:40.363] deployment.apps/nginx configured
W0813 02:38:40.464] I0813 02:38:40.368451   53229 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565663908-19227", Name:"nginx", UID:"517a2c4e-4bbd-472f-b631-547948ecde18", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0813 02:38:40.465] I0813 02:38:40.374715   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663908-19227", Name:"nginx-594f77b9f6", UID:"78b9d40d-61ea-41c0-aa3d-83fa4bb518a5", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-2rfk2
W0813 02:38:40.465] I0813 02:38:40.378724   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663908-19227", Name:"nginx-594f77b9f6", UID:"78b9d40d-61ea-41c0-aa3d-83fa4bb518a5", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-zh45m
W0813 02:38:40.466] I0813 02:38:40.381064   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663908-19227", Name:"nginx-594f77b9f6", UID:"78b9d40d-61ea-41c0-aa3d-83fa4bb518a5", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-wtpdk
I0813 02:38:40.566] Successful
... skipping 168 lines ...
I0813 02:38:47.836] +++ [0813 02:38:47] Creating namespace namespace-1565663927-21920
I0813 02:38:47.912] namespace/namespace-1565663927-21920 created
I0813 02:38:47.984] Context "test" modified.
I0813 02:38:47.995] +++ [0813 02:38:47] Testing kubectl get
I0813 02:38:48.091] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:38:48.182] Successful
I0813 02:38:48.182] message:Error from server (NotFound): pods "abc" not found
I0813 02:38:48.183] has:pods "abc" not found
I0813 02:38:48.273] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:38:48.361] Successful
I0813 02:38:48.361] message:Error from server (NotFound): pods "abc" not found
I0813 02:38:48.362] has:pods "abc" not found
I0813 02:38:48.455] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:38:48.541] Successful
I0813 02:38:48.542] message:{
I0813 02:38:48.542]     "apiVersion": "v1",
I0813 02:38:48.542]     "items": [],
... skipping 23 lines ...
I0813 02:38:48.892] has not:No resources found
I0813 02:38:48.975] Successful
I0813 02:38:48.975] message:NAME
I0813 02:38:48.975] has not:No resources found
I0813 02:38:49.072] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:38:49.170] Successful
I0813 02:38:49.171] message:error: the server doesn't have a resource type "foobar"
I0813 02:38:49.171] has not:No resources found
I0813 02:38:49.269] Successful
I0813 02:38:49.270] message:No resources found in namespace-1565663927-21920 namespace.
I0813 02:38:49.270] has:No resources found
I0813 02:38:49.367] Successful
I0813 02:38:49.367] message:
I0813 02:38:49.367] has not:No resources found
I0813 02:38:49.451] Successful
I0813 02:38:49.452] message:No resources found in namespace-1565663927-21920 namespace.
I0813 02:38:49.452] has:No resources found
I0813 02:38:49.541] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:38:49.638] Successful
I0813 02:38:49.639] message:Error from server (NotFound): pods "abc" not found
I0813 02:38:49.639] has:pods "abc" not found
I0813 02:38:49.641] FAIL!
I0813 02:38:49.641] message:Error from server (NotFound): pods "abc" not found
I0813 02:38:49.641] has not:List
I0813 02:38:49.641] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0813 02:38:49.757] Successful
I0813 02:38:49.758] message:I0813 02:38:49.709192   63819 loader.go:375] Config loaded from file:  /tmp/tmp.fhqjrgDgsb/.kube/config
I0813 02:38:49.758] I0813 02:38:49.710720   63819 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0813 02:38:49.759] I0813 02:38:49.733454   63819 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0813 02:38:55.354] Successful
I0813 02:38:55.354] message:NAME    DATA   AGE
I0813 02:38:55.354] one     0      0s
I0813 02:38:55.354] three   0      0s
I0813 02:38:55.355] two     0      0s
I0813 02:38:55.355] STATUS    REASON          MESSAGE
I0813 02:38:55.355] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 02:38:55.355] has not:watch is only supported on individual resources
I0813 02:38:56.442] Successful
I0813 02:38:56.442] message:STATUS    REASON          MESSAGE
I0813 02:38:56.443] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 02:38:56.443] has not:watch is only supported on individual resources
I0813 02:38:56.450] +++ [0813 02:38:56] Creating namespace namespace-1565663936-11348
I0813 02:38:56.530] namespace/namespace-1565663936-11348 created
I0813 02:38:56.610] Context "test" modified.
I0813 02:38:56.710] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:38:56.883] pod/valid-pod created
... skipping 104 lines ...
I0813 02:38:56.993] }
I0813 02:38:57.083] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:38:57.344] <no value>Successful
I0813 02:38:57.345] message:valid-pod:
I0813 02:38:57.345] has:valid-pod:
I0813 02:38:57.434] Successful
I0813 02:38:57.434] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0813 02:38:57.434] 	template was:
I0813 02:38:57.435] 		{.missing}
I0813 02:38:57.435] 	object given to jsonpath engine was:
I0813 02:38:57.438] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-13T02:38:56Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-13T02:38:56Z"}}, "name":"valid-pod", "namespace":"namespace-1565663936-11348", "resourceVersion":"693", "selfLink":"/api/v1/namespaces/namespace-1565663936-11348/pods/valid-pod", "uid":"21763681-0651-4ddd-ad59-de82a40ff7bf"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0813 02:38:57.438] has:missing is not found
I0813 02:38:57.528] Successful
I0813 02:38:57.529] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0813 02:38:57.529] 	template was:
I0813 02:38:57.529] 		{{.missing}}
I0813 02:38:57.529] 	raw data was:
I0813 02:38:57.531] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-13T02:38:56Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-13T02:38:56Z"}],"name":"valid-pod","namespace":"namespace-1565663936-11348","resourceVersion":"693","selfLink":"/api/v1/namespaces/namespace-1565663936-11348/pods/valid-pod","uid":"21763681-0651-4ddd-ad59-de82a40ff7bf"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0813 02:38:57.531] 	object given to template engine was:
I0813 02:38:57.532] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-13T02:38:56Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-13T02:38:56Z]] name:valid-pod namespace:namespace-1565663936-11348 resourceVersion:693 selfLink:/api/v1/namespaces/namespace-1565663936-11348/pods/valid-pod uid:21763681-0651-4ddd-ad59-de82a40ff7bf] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0813 02:38:57.532] has:map has no entry for key "missing"
W0813 02:38:57.632] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0813 02:38:58.614] Successful
I0813 02:38:58.614] message:NAME        READY   STATUS    RESTARTS   AGE
I0813 02:38:58.614] valid-pod   0/1     Pending   0          1s
I0813 02:38:58.614] STATUS      REASON          MESSAGE
I0813 02:38:58.614] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 02:38:58.614] has:STATUS
I0813 02:38:58.615] Successful
I0813 02:38:58.615] message:NAME        READY   STATUS    RESTARTS   AGE
I0813 02:38:58.615] valid-pod   0/1     Pending   0          1s
I0813 02:38:58.615] STATUS      REASON          MESSAGE
I0813 02:38:58.616] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 02:38:58.616] has:valid-pod
I0813 02:38:59.706] Successful
I0813 02:38:59.706] message:pod/valid-pod
I0813 02:38:59.707] has not:STATUS
I0813 02:38:59.708] Successful
I0813 02:38:59.708] message:pod/valid-pod
... skipping 144 lines ...
I0813 02:39:00.819] status:
I0813 02:39:00.819]   phase: Pending
I0813 02:39:00.819]   qosClass: Guaranteed
I0813 02:39:00.819] ---
I0813 02:39:00.819] has:name: valid-pod
I0813 02:39:00.888] Successful
I0813 02:39:00.888] message:Error from server (NotFound): pods "invalid-pod" not found
I0813 02:39:00.889] has:"invalid-pod" not found
I0813 02:39:00.976] pod "valid-pod" deleted
I0813 02:39:01.082] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:39:01.247] pod/redis-master created
I0813 02:39:01.252] pod/valid-pod created
I0813 02:39:01.345] Successful
... skipping 35 lines ...
I0813 02:39:02.539] +++ command: run_kubectl_exec_pod_tests
I0813 02:39:02.553] +++ [0813 02:39:02] Creating namespace namespace-1565663942-17921
I0813 02:39:02.633] namespace/namespace-1565663942-17921 created
I0813 02:39:02.709] Context "test" modified.
I0813 02:39:02.716] +++ [0813 02:39:02] Testing kubectl exec POD COMMAND
I0813 02:39:02.796] Successful
I0813 02:39:02.797] message:Error from server (NotFound): pods "abc" not found
I0813 02:39:02.797] has:pods "abc" not found
I0813 02:39:02.955] pod/test-pod created
I0813 02:39:03.055] Successful
I0813 02:39:03.056] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0813 02:39:03.056] has not:pods "test-pod" not found
I0813 02:39:03.058] Successful
I0813 02:39:03.058] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0813 02:39:03.059] has not:pod or type/name must be specified
I0813 02:39:03.134] pod "test-pod" deleted
I0813 02:39:03.158] +++ exit code: 0
I0813 02:39:03.199] Recording: run_kubectl_exec_resource_name_tests
I0813 02:39:03.199] Running command: run_kubectl_exec_resource_name_tests
I0813 02:39:03.224] 
... skipping 2 lines ...
I0813 02:39:03.232] +++ command: run_kubectl_exec_resource_name_tests
I0813 02:39:03.247] +++ [0813 02:39:03] Creating namespace namespace-1565663943-17699
I0813 02:39:03.323] namespace/namespace-1565663943-17699 created
I0813 02:39:03.393] Context "test" modified.
I0813 02:39:03.401] +++ [0813 02:39:03] Testing kubectl exec TYPE/NAME COMMAND
I0813 02:39:03.501] Successful
I0813 02:39:03.502] message:error: the server doesn't have a resource type "foo"
I0813 02:39:03.502] has:error:
I0813 02:39:03.587] Successful
I0813 02:39:03.587] message:Error from server (NotFound): deployments.apps "bar" not found
I0813 02:39:03.588] has:"bar" not found
I0813 02:39:03.743] pod/test-pod created
I0813 02:39:03.905] replicaset.apps/frontend created
W0813 02:39:04.006] I0813 02:39:03.910791   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663943-17699", Name:"frontend", UID:"8f7fbd92-3efa-46fe-8b0e-6ebb7d44effd", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s9wqn
W0813 02:39:04.007] I0813 02:39:03.914661   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663943-17699", Name:"frontend", UID:"8f7fbd92-3efa-46fe-8b0e-6ebb7d44effd", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6z4sf
W0813 02:39:04.008] I0813 02:39:03.915334   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663943-17699", Name:"frontend", UID:"8f7fbd92-3efa-46fe-8b0e-6ebb7d44effd", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xhg8t
I0813 02:39:04.108] configmap/test-set-env-config created
I0813 02:39:04.177] Successful
I0813 02:39:04.178] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0813 02:39:04.178] has:not implemented
I0813 02:39:04.274] Successful
I0813 02:39:04.274] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0813 02:39:04.274] has not:not found
I0813 02:39:04.275] Successful
I0813 02:39:04.276] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0813 02:39:04.276] has not:pod or type/name must be specified
I0813 02:39:04.381] Successful
I0813 02:39:04.381] message:Error from server (BadRequest): pod frontend-6z4sf does not have a host assigned
I0813 02:39:04.382] has not:not found
I0813 02:39:04.383] Successful
I0813 02:39:04.383] message:Error from server (BadRequest): pod frontend-6z4sf does not have a host assigned
I0813 02:39:04.384] has not:pod or type/name must be specified
I0813 02:39:04.461] pod "test-pod" deleted
I0813 02:39:04.540] replicaset.apps "frontend" deleted
I0813 02:39:04.631] configmap "test-set-env-config" deleted
I0813 02:39:04.651] +++ exit code: 0
I0813 02:39:04.691] Recording: run_create_secret_tests
I0813 02:39:04.692] Running command: run_create_secret_tests
I0813 02:39:04.715] 
I0813 02:39:04.717] +++ Running case: test-cmd.run_create_secret_tests 
I0813 02:39:04.719] +++ working dir: /go/src/k8s.io/kubernetes
I0813 02:39:04.722] +++ command: run_create_secret_tests
I0813 02:39:04.819] Successful
I0813 02:39:04.819] message:Error from server (NotFound): secrets "mysecret" not found
I0813 02:39:04.819] has:secrets "mysecret" not found
I0813 02:39:04.982] Successful
I0813 02:39:04.983] message:Error from server (NotFound): secrets "mysecret" not found
I0813 02:39:04.983] has:secrets "mysecret" not found
I0813 02:39:04.984] Successful
I0813 02:39:04.985] message:user-specified
I0813 02:39:04.985] has:user-specified
I0813 02:39:05.056] Successful
I0813 02:39:05.137] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"cdf85b52-e87f-4276-a569-317700d3b8e6","resourceVersion":"767","creationTimestamp":"2019-08-13T02:39:05Z"}}
... skipping 2 lines ...
I0813 02:39:05.303] has:uid
I0813 02:39:05.377] Successful
I0813 02:39:05.378] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"cdf85b52-e87f-4276-a569-317700d3b8e6","resourceVersion":"768","creationTimestamp":"2019-08-13T02:39:05Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-13T02:39:05Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0813 02:39:05.378] has:config1
I0813 02:39:05.443] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"cdf85b52-e87f-4276-a569-317700d3b8e6"}}
I0813 02:39:05.531] Successful
I0813 02:39:05.531] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0813 02:39:05.532] has:configmaps "tester-update-cm" not found
I0813 02:39:05.546] +++ exit code: 0
I0813 02:39:05.584] Recording: run_kubectl_create_kustomization_directory_tests
I0813 02:39:05.584] Running command: run_kubectl_create_kustomization_directory_tests
I0813 02:39:05.607] 
I0813 02:39:05.609] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0813 02:39:08.351] valid-pod   0/1     Pending   0          0s
I0813 02:39:08.351] has:valid-pod
I0813 02:39:09.436] Successful
I0813 02:39:09.436] message:NAME        READY   STATUS    RESTARTS   AGE
I0813 02:39:09.436] valid-pod   0/1     Pending   0          0s
I0813 02:39:09.436] STATUS      REASON          MESSAGE
I0813 02:39:09.436] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0813 02:39:09.436] has:Timeout exceeded while reading body
I0813 02:39:09.520] Successful
I0813 02:39:09.521] message:NAME        READY   STATUS    RESTARTS   AGE
I0813 02:39:09.521] valid-pod   0/1     Pending   0          1s
I0813 02:39:09.521] has:valid-pod
I0813 02:39:09.596] Successful
I0813 02:39:09.597] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0813 02:39:09.597] has:Invalid timeout value
I0813 02:39:09.679] pod "valid-pod" deleted
I0813 02:39:09.702] +++ exit code: 0
I0813 02:39:09.741] Recording: run_crd_tests
I0813 02:39:09.742] Running command: run_crd_tests
I0813 02:39:09.764] 
... skipping 244 lines ...
I0813 02:39:14.526] foo.company.com/test patched
I0813 02:39:14.624] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0813 02:39:14.708] foo.company.com/test patched
I0813 02:39:14.801] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0813 02:39:14.882] foo.company.com/test patched
I0813 02:39:14.978] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0813 02:39:15.145] +++ [0813 02:39:15] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0813 02:39:15.208] {
I0813 02:39:15.209]     "apiVersion": "company.com/v1",
I0813 02:39:15.209]     "kind": "Foo",
I0813 02:39:15.209]     "metadata": {
I0813 02:39:15.209]         "annotations": {
I0813 02:39:15.210]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 355 lines ...
I0813 02:39:37.717] namespace/non-native-resources created
I0813 02:39:37.898] bar.company.com/test created
I0813 02:39:38.003] crd.sh:455: Successful get bars {{len .items}}: 1
I0813 02:39:38.087] namespace "non-native-resources" deleted
I0813 02:39:43.308] crd.sh:458: Successful get bars {{len .items}}: 0
I0813 02:39:43.484] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0813 02:39:43.585] Error from server (NotFound): namespaces "non-native-resources" not found
I0813 02:39:43.686] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0813 02:39:43.696] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0813 02:39:43.830] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0813 02:39:43.871] +++ exit code: 0
I0813 02:39:43.906] Recording: run_cmd_with_img_tests
I0813 02:39:43.907] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0813 02:39:44.231] I0813 02:39:44.230299   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663983-2740", Name:"test1-9797f89d8", UID:"b69bbed4-4efb-45d9-8234-464b1c364ed2", APIVersion:"apps/v1", ResourceVersion:"923", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-cb68m
I0813 02:39:44.332] Successful
I0813 02:39:44.335] message:deployment.apps/test1 created
I0813 02:39:44.335] has:deployment.apps/test1 created
I0813 02:39:44.335] deployment.apps "test1" deleted
I0813 02:39:44.414] Successful
I0813 02:39:44.414] message:error: Invalid image name "InvalidImageName": invalid reference format
I0813 02:39:44.415] has:error: Invalid image name "InvalidImageName": invalid reference format
I0813 02:39:44.427] +++ exit code: 0
I0813 02:39:44.466] +++ [0813 02:39:44] Testing recursive resources
I0813 02:39:44.471] +++ [0813 02:39:44] Creating namespace namespace-1565663984-20054
I0813 02:39:44.549] namespace/namespace-1565663984-20054 created
I0813 02:39:44.622] Context "test" modified.
I0813 02:39:44.720] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:39:45.036] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:45.038] Successful
I0813 02:39:45.038] message:pod/busybox0 created
I0813 02:39:45.038] pod/busybox1 created
I0813 02:39:45.039] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0813 02:39:45.039] has:error validating data: kind not set
I0813 02:39:45.132] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:45.320] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0813 02:39:45.322] Successful
I0813 02:39:45.323] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:45.323] has:Object 'Kind' is missing
I0813 02:39:45.425] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:45.727] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0813 02:39:45.729] Successful
I0813 02:39:45.730] message:pod/busybox0 replaced
I0813 02:39:45.730] pod/busybox1 replaced
I0813 02:39:45.730] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0813 02:39:45.730] has:error validating data: kind not set
W0813 02:39:45.831] W0813 02:39:44.496394   49759 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0813 02:39:45.831] E0813 02:39:44.498109   53229 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:45.831] W0813 02:39:44.603747   49759 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0813 02:39:45.831] E0813 02:39:44.605680   53229 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:45.832] W0813 02:39:44.706816   49759 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0813 02:39:45.832] E0813 02:39:44.708738   53229 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:45.832] W0813 02:39:44.851189   49759 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0813 02:39:45.832] E0813 02:39:44.853393   53229 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:45.833] E0813 02:39:45.499917   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:45.833] E0813 02:39:45.606919   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:45.833] E0813 02:39:45.710564   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:45.855] E0813 02:39:45.855244   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:39:45.956] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:45.973] (BSuccessful
I0813 02:39:45.974] message:Name:         busybox0
I0813 02:39:45.974] Namespace:    namespace-1565663984-20054
I0813 02:39:45.974] Priority:     0
I0813 02:39:45.974] Node:         <none>
... skipping 159 lines ...
I0813 02:39:45.999] has:Object 'Kind' is missing
I0813 02:39:46.106] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:46.304] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0813 02:39:46.307] Successful
I0813 02:39:46.307] message:pod/busybox0 annotated
I0813 02:39:46.307] pod/busybox1 annotated
I0813 02:39:46.308] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:46.308] has:Object 'Kind' is missing
I0813 02:39:46.410] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:46.721] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0813 02:39:46.724] Successful
I0813 02:39:46.725] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0813 02:39:46.725] pod/busybox0 configured
I0813 02:39:46.725] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0813 02:39:46.725] pod/busybox1 configured
I0813 02:39:46.725] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0813 02:39:46.726] has:error validating data: kind not set
I0813 02:39:46.820] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:39:46.989] deployment.apps/nginx created
I0813 02:39:47.093] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0813 02:39:47.188] (Bgeneric-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0813 02:39:47.373] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0813 02:39:47.376] Successful
... skipping 42 lines ...
I0813 02:39:47.460] deployment.apps "nginx" deleted
I0813 02:39:47.563] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:47.736] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:47.739] Successful
I0813 02:39:47.740] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0813 02:39:47.740] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0813 02:39:47.740] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:47.740] has:Object 'Kind' is missing
I0813 02:39:47.840] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:47.931] Successful
I0813 02:39:47.932] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:47.932] has:busybox0:busybox1:
I0813 02:39:47.934] Successful
I0813 02:39:47.934] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:47.934] has:Object 'Kind' is missing
I0813 02:39:48.030] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:48.126] pod/busybox0 labeled
I0813 02:39:48.127] pod/busybox1 labeled
I0813 02:39:48.127] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:48.222] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0813 02:39:48.224] Successful
I0813 02:39:48.224] message:pod/busybox0 labeled
I0813 02:39:48.225] pod/busybox1 labeled
I0813 02:39:48.225] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:48.226] has:Object 'Kind' is missing
I0813 02:39:48.327] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:48.413] pod/busybox0 patched
I0813 02:39:48.413] pod/busybox1 patched
I0813 02:39:48.413] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:48.507] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0813 02:39:48.510] Successful
I0813 02:39:48.510] message:pod/busybox0 patched
I0813 02:39:48.510] pod/busybox1 patched
I0813 02:39:48.510] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:48.511] has:Object 'Kind' is missing
I0813 02:39:48.610] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:48.790] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:39:48.793] Successful
I0813 02:39:48.793] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0813 02:39:48.793] pod "busybox0" force deleted
I0813 02:39:48.794] pod "busybox1" force deleted
I0813 02:39:48.794] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0813 02:39:48.794] has:Object 'Kind' is missing
I0813 02:39:48.884] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:39:49.051] replicationcontroller/busybox0 created
I0813 02:39:49.055] replicationcontroller/busybox1 created
W0813 02:39:49.155] E0813 02:39:46.501350   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.156] E0813 02:39:46.608857   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.156] E0813 02:39:46.712764   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.157] E0813 02:39:46.856728   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.157] I0813 02:39:46.995776   53229 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565663984-20054", Name:"nginx", UID:"0884d19c-0689-4026-b3cf-289c34f270cb", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0813 02:39:49.157] I0813 02:39:47.004089   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663984-20054", Name:"nginx-bbbbb95b5", UID:"1c7ccb63-1da1-4892-b47b-cc65df640d21", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-nb5tr
W0813 02:39:49.158] I0813 02:39:47.007262   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663984-20054", Name:"nginx-bbbbb95b5", UID:"1c7ccb63-1da1-4892-b47b-cc65df640d21", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-l96jq
W0813 02:39:49.158] I0813 02:39:47.007506   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663984-20054", Name:"nginx-bbbbb95b5", UID:"1c7ccb63-1da1-4892-b47b-cc65df640d21", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-q6msx
W0813 02:39:49.158] kubectl convert is DEPRECATED and will be removed in a future version.
W0813 02:39:49.158] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0813 02:39:49.158] E0813 02:39:47.502897   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.159] E0813 02:39:47.610491   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.159] E0813 02:39:47.714948   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.159] E0813 02:39:47.858346   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.159] E0813 02:39:48.504238   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.159] E0813 02:39:48.612467   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.160] E0813 02:39:48.716555   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.160] E0813 02:39:48.859469   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:49.160] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0813 02:39:49.160] I0813 02:39:49.055348   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565663984-20054", Name:"busybox0", UID:"85c79223-4a4a-45bf-bc18-6afc0b0e221b", APIVersion:"v1", ResourceVersion:"979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-tcj2l
W0813 02:39:49.161] I0813 02:39:49.060179   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565663984-20054", Name:"busybox1", UID:"edc3698d-9970-4705-b58b-536acbd8533b", APIVersion:"v1", ResourceVersion:"981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-th84t
I0813 02:39:49.261] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:49.262] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:49.364] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0813 02:39:49.465] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0813 02:39:49.653] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0813 02:39:49.749] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0813 02:39:49.752] Successful
I0813 02:39:49.752] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0813 02:39:49.752] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0813 02:39:49.753] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:49.753] has:Object 'Kind' is missing
I0813 02:39:49.839] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0813 02:39:49.928] horizontalpodautoscaler.autoscaling "busybox1" deleted
W0813 02:39:50.029] E0813 02:39:49.506327   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:50.029] E0813 02:39:49.613853   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:50.029] E0813 02:39:49.718164   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:50.030] E0813 02:39:49.861265   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:39:50.130] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:50.136] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0813 02:39:50.231] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0813 02:39:50.441] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0813 02:39:50.540] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0813 02:39:50.543] Successful
I0813 02:39:50.547] message:service/busybox0 exposed
I0813 02:39:50.547] service/busybox1 exposed
I0813 02:39:50.548] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:50.548] has:Object 'Kind' is missing
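The expose step above creates a service for each valid replication controller on port 80; because the port is given no name, the go-template for the port name renders <no value>. An illustrative single-resource equivalent (resource names taken from the log, flags are standard kubectl):

kubectl expose rc busybox0 --port=80
kubectl get service busybox0 -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'
# prints: <no value> 80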
I0813 02:39:50.653] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:50.749] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0813 02:39:50.847] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0813 02:39:51.056] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0813 02:39:51.149] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0813 02:39:51.151] Successful
I0813 02:39:51.152] message:replicationcontroller/busybox0 scaled
I0813 02:39:51.152] replicationcontroller/busybox1 scaled
I0813 02:39:51.152] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:51.152] has:Object 'Kind' is missing
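The scale step above raises both controllers from 1 to 2 replicas, while the broken manifest in the same directory again contributes only the decode error. A hedged single-resource sketch:

kubectl scale rc busybox0 --replicas=2
kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'   # prints: 2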
I0813 02:39:51.245] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:51.439] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:39:51.441] Successful
I0813 02:39:51.441] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0813 02:39:51.441] replicationcontroller "busybox0" force deleted
I0813 02:39:51.442] replicationcontroller "busybox1" force deleted
I0813 02:39:51.442] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:51.442] has:Object 'Kind' is missing
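The force deletion above does not wait for graceful termination, which is why kubectl emits the "Immediate deletion does not wait for confirmation" warning before reporting the controllers as force deleted. An illustrative equivalent:

kubectl delete rc busybox0 busybox1 --force --grace-period=0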
I0813 02:39:51.539] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:39:51.716] deployment.apps/nginx1-deployment created
I0813 02:39:51.722] deployment.apps/nginx0-deployment created
W0813 02:39:51.823] E0813 02:39:50.508475   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:51.824] E0813 02:39:50.615672   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:51.825] E0813 02:39:50.720066   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:51.825] E0813 02:39:50.862501   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:51.826] I0813 02:39:50.943129   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565663984-20054", Name:"busybox0", UID:"85c79223-4a4a-45bf-bc18-6afc0b0e221b", APIVersion:"v1", ResourceVersion:"1000", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-57hcs
W0813 02:39:51.826] I0813 02:39:50.951098   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565663984-20054", Name:"busybox1", UID:"edc3698d-9970-4705-b58b-536acbd8533b", APIVersion:"v1", ResourceVersion:"1003", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-6pbn7
W0813 02:39:51.826] E0813 02:39:51.510019   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:51.827] E0813 02:39:51.617139   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:51.827] E0813 02:39:51.722015   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:51.827] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0813 02:39:51.827] I0813 02:39:51.722394   53229 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565663984-20054", Name:"nginx1-deployment", UID:"0485ac69-583f-45e8-be49-d6216af41fb5", APIVersion:"apps/v1", ResourceVersion:"1021", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0813 02:39:51.828] I0813 02:39:51.727050   53229 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565663984-20054", Name:"nginx0-deployment", UID:"cac2771d-e4a8-4e5b-9dce-5dcafba32984", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0813 02:39:51.828] I0813 02:39:51.727992   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663984-20054", Name:"nginx1-deployment-84f7f49fb7", UID:"0804e6e5-72a9-476d-96ef-7276a3f375a0", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-sm8fl
W0813 02:39:51.828] I0813 02:39:51.732601   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663984-20054", Name:"nginx1-deployment-84f7f49fb7", UID:"0804e6e5-72a9-476d-96ef-7276a3f375a0", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-wrnm9
W0813 02:39:51.829] I0813 02:39:51.738738   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663984-20054", Name:"nginx0-deployment-57475bf54d", UID:"7c82e115-b36f-4a8f-8d7e-4eacbe40ff83", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-bz9x7
W0813 02:39:51.829] I0813 02:39:51.742425   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565663984-20054", Name:"nginx0-deployment-57475bf54d", UID:"7c82e115-b36f-4a8f-8d7e-4eacbe40ff83", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-8f6cm
W0813 02:39:51.864] E0813 02:39:51.863844   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:39:51.965] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0813 02:39:51.966] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0813 02:39:52.146] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0813 02:39:52.148] Successful
I0813 02:39:52.149] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0813 02:39:52.149] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0813 02:39:52.150] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 02:39:52.150] has:Object 'Kind' is missing
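The "skipped rollback (current template already matches revision 1)" messages show kubectl rollout undo comparing the requested revision against the live pod template and doing nothing when they already match. An illustrative invocation against one of the deployments from the log:

kubectl rollout undo deployment/nginx1-deployment --to-revision=1
# deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)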
I0813 02:39:52.240] deployment.apps/nginx1-deployment paused
I0813 02:39:52.245] deployment.apps/nginx0-deployment paused
I0813 02:39:52.356] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0813 02:39:52.358] Successful
I0813 02:39:52.359] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 02:39:52.359] has:Object 'Kind' is missing
I0813 02:39:52.454] deployment.apps/nginx1-deployment resumed
I0813 02:39:52.461] deployment.apps/nginx0-deployment resumed
W0813 02:39:52.562] E0813 02:39:52.511412   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:52.619] E0813 02:39:52.618729   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:39:52.720] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0813 02:39:52.720] Successful
I0813 02:39:52.721] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 02:39:52.721] has:Object 'Kind' is missing
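Pausing a deployment sets .spec.paused to true; resuming removes the field rather than setting it to false, which is why the template prints true:true: before the resume and <no value>:<no value>: after it. A minimal sketch for a single deployment:

kubectl rollout pause deployment/nginx1-deployment
kubectl get deployment nginx1-deployment -o go-template='{{.spec.paused}}'   # true
kubectl rollout resume deployment/nginx1-deployment
kubectl get deployment nginx1-deployment -o go-template='{{.spec.paused}}'   # <no value> (field cleared)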
I0813 02:39:52.721] Successful
I0813 02:39:52.721] message:deployment.apps/nginx1-deployment 
I0813 02:39:52.721] REVISION  CHANGE-CAUSE
I0813 02:39:52.721] 1         <none>
I0813 02:39:52.721] 
I0813 02:39:52.721] deployment.apps/nginx0-deployment 
I0813 02:39:52.721] REVISION  CHANGE-CAUSE
I0813 02:39:52.721] 1         <none>
I0813 02:39:52.722] 
I0813 02:39:52.722] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 02:39:52.722] has:nginx0-deployment
I0813 02:39:52.722] Successful
I0813 02:39:52.722] message:deployment.apps/nginx1-deployment 
I0813 02:39:52.722] REVISION  CHANGE-CAUSE
I0813 02:39:52.722] 1         <none>
I0813 02:39:52.722] 
I0813 02:39:52.722] deployment.apps/nginx0-deployment 
I0813 02:39:52.723] REVISION  CHANGE-CAUSE
I0813 02:39:52.723] 1         <none>
I0813 02:39:52.723] 
I0813 02:39:52.723] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 02:39:52.723] has:nginx1-deployment
I0813 02:39:52.723] Successful
I0813 02:39:52.723] message:deployment.apps/nginx1-deployment 
I0813 02:39:52.723] REVISION  CHANGE-CAUSE
I0813 02:39:52.723] 1         <none>
I0813 02:39:52.723] 
I0813 02:39:52.724] deployment.apps/nginx0-deployment 
I0813 02:39:52.724] REVISION  CHANGE-CAUSE
I0813 02:39:52.724] 1         <none>
I0813 02:39:52.724] 
I0813 02:39:52.724] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0813 02:39:52.724] has:Object 'Kind' is missing
I0813 02:39:52.780] deployment.apps "nginx1-deployment" force deleted
I0813 02:39:52.790] deployment.apps "nginx0-deployment" force deleted
W0813 02:39:52.890] E0813 02:39:52.723459   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:52.891] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0813 02:39:52.891] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0813 02:39:52.892] E0813 02:39:52.865608   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:53.513] E0813 02:39:53.512947   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:53.621] E0813 02:39:53.620384   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:53.725] E0813 02:39:53.725132   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:53.868] E0813 02:39:53.867262   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:39:53.968] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:39:54.050] replicationcontroller/busybox0 created
I0813 02:39:54.055] replicationcontroller/busybox1 created
I0813 02:39:54.160] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0813 02:39:54.264] (BSuccessful
I0813 02:39:54.264] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0813 02:39:54.268] message:no rollbacker has been implemented for "ReplicationController"
I0813 02:39:54.268] no rollbacker has been implemented for "ReplicationController"
I0813 02:39:54.268] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:54.269] has:Object 'Kind' is missing
I0813 02:39:54.366] Successful
I0813 02:39:54.367] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:54.367] error: replicationcontrollers "busybox0" pausing is not supported
I0813 02:39:54.367] error: replicationcontrollers "busybox1" pausing is not supported
I0813 02:39:54.368] has:Object 'Kind' is missing
I0813 02:39:54.368] Successful
I0813 02:39:54.369] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:54.369] error: replicationcontrollers "busybox0" pausing is not supported
I0813 02:39:54.369] error: replicationcontrollers "busybox1" pausing is not supported
I0813 02:39:54.369] has:replicationcontrollers "busybox0" pausing is not supported
I0813 02:39:54.370] Successful
I0813 02:39:54.371] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:54.371] error: replicationcontrollers "busybox0" pausing is not supported
I0813 02:39:54.371] error: replicationcontrollers "busybox1" pausing is not supported
I0813 02:39:54.371] has:replicationcontrollers "busybox1" pausing is not supported
I0813 02:39:54.468] Successful
I0813 02:39:54.468] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:54.469] error: replicationcontrollers "busybox0" resuming is not supported
I0813 02:39:54.469] error: replicationcontrollers "busybox1" resuming is not supported
I0813 02:39:54.469] has:Object 'Kind' is missing
I0813 02:39:54.470] Successful
I0813 02:39:54.471] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:54.471] error: replicationcontrollers "busybox0" resuming is not supported
I0813 02:39:54.471] error: replicationcontrollers "busybox1" resuming is not supported
I0813 02:39:54.471] has:replicationcontrollers "busybox0" resuming is not supported
I0813 02:39:54.472] Successful
I0813 02:39:54.473] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0813 02:39:54.473] error: replicationcontrollers "busybox0" resuming is not supported
I0813 02:39:54.473] error: replicationcontrollers "busybox1" resuming is not supported
I0813 02:39:54.474] has:replicationcontrollers "busybox0" resuming is not supported
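ReplicationControllers carry no rollout history, so rollout pause, resume, and undo all fail for them: the "no rollbacker has been implemented" and "pausing/resuming is not supported" messages above are those per-object errors, reported alongside the usual decode error for the broken manifest. Illustrative commands that produce the same messages (the single-resource form is an assumption; the test drives this through the recursive -f directory):

kubectl rollout pause rc/busybox0    # error: replicationcontrollers "busybox0" pausing is not supported
kubectl rollout resume rc/busybox0   # error: replicationcontrollers "busybox0" resuming is not supported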
I0813 02:39:54.556] replicationcontroller "busybox0" force deleted
I0813 02:39:54.561] replicationcontroller "busybox1" force deleted
W0813 02:39:54.662] I0813 02:39:54.053895   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565663984-20054", Name:"busybox0", UID:"0afd36a9-e368-45f4-8482-56429e6aae89", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-64q7n
W0813 02:39:54.662] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0813 02:39:54.663] I0813 02:39:54.059679   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565663984-20054", Name:"busybox1", UID:"1bbb8543-5354-42a3-9e9f-28ac5f9b9344", APIVersion:"v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xmls8
W0813 02:39:54.663] E0813 02:39:54.514680   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:54.664] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0813 02:39:54.664] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0813 02:39:54.664] E0813 02:39:54.622096   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:54.727] E0813 02:39:54.726843   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:54.869] E0813 02:39:54.869161   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:55.517] E0813 02:39:55.516307   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:39:55.617] Recording: run_namespace_tests
I0813 02:39:55.618] Running command: run_namespace_tests
I0813 02:39:55.618] 
I0813 02:39:55.618] +++ Running case: test-cmd.run_namespace_tests 
I0813 02:39:55.618] +++ working dir: /go/src/k8s.io/kubernetes
I0813 02:39:55.618] +++ command: run_namespace_tests
I0813 02:39:55.618] +++ [0813 02:39:55] Testing kubectl(v1:namespaces)
I0813 02:39:55.686] namespace/my-namespace created
I0813 02:39:55.790] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0813 02:39:55.881] namespace "my-namespace" deleted
W0813 02:39:55.982] E0813 02:39:55.623821   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:55.982] E0813 02:39:55.728479   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:55.983] E0813 02:39:55.870813   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:56.519] E0813 02:39:56.518092   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:56.626] E0813 02:39:56.625734   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:56.731] E0813 02:39:56.730352   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:56.873] E0813 02:39:56.872454   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:57.520] E0813 02:39:57.519754   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:57.628] E0813 02:39:57.627410   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:57.732] E0813 02:39:57.731993   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:57.875] E0813 02:39:57.874260   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:58.522] E0813 02:39:58.521498   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:58.630] E0813 02:39:58.629285   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:58.734] E0813 02:39:58.734041   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:58.876] E0813 02:39:58.875311   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:59.524] E0813 02:39:59.523428   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:59.631] E0813 02:39:59.630954   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:59.736] E0813 02:39:59.735866   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:39:59.878] E0813 02:39:59.877461   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:00.526] E0813 02:40:00.525358   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:00.633] E0813 02:40:00.632739   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:00.738] E0813 02:40:00.737781   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:00.879] E0813 02:40:00.879194   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:00.980] namespace/my-namespace condition met
I0813 02:40:01.068] Successful
I0813 02:40:01.068] message:Error from server (NotFound): namespaces "my-namespace" not found
I0813 02:40:01.069] has: not found
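The namespace test above creates my-namespace, deletes it, blocks until the deletion completes (the "condition met" line), and then verifies that a follow-up get returns NotFound. A hedged sketch of the same flow; the wait timeout value is an assumption, not necessarily what the test uses:

kubectl create namespace my-namespace
kubectl delete namespace my-namespace
kubectl wait --for=delete namespace/my-namespace --timeout=60s   # namespace/my-namespace condition met
kubectl get namespace my-namespace                               # Error from server (NotFound)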
I0813 02:40:01.144] namespace/my-namespace created
I0813 02:40:01.250] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0813 02:40:01.459] Successful
I0813 02:40:01.460] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0813 02:40:01.460] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0813 02:40:01.463] namespace "namespace-1565663946-9785" deleted
I0813 02:40:01.463] namespace "namespace-1565663947-16523" deleted
I0813 02:40:01.464] namespace "namespace-1565663949-11066" deleted
I0813 02:40:01.464] namespace "namespace-1565663951-10066" deleted
I0813 02:40:01.464] namespace "namespace-1565663983-2740" deleted
I0813 02:40:01.465] namespace "namespace-1565663984-20054" deleted
I0813 02:40:01.465] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0813 02:40:01.465] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0813 02:40:01.466] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0813 02:40:01.466] has:warning: deleting cluster-scoped resources
I0813 02:40:01.467] Successful
I0813 02:40:01.467] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0813 02:40:01.468] namespace "kube-node-lease" deleted
I0813 02:40:01.468] namespace "my-namespace" deleted
I0813 02:40:01.468] namespace "namespace-1565663850-24893" deleted
... skipping 27 lines ...
I0813 02:40:01.480] namespace "namespace-1565663946-9785" deleted
I0813 02:40:01.480] namespace "namespace-1565663947-16523" deleted
I0813 02:40:01.480] namespace "namespace-1565663949-11066" deleted
I0813 02:40:01.481] namespace "namespace-1565663951-10066" deleted
I0813 02:40:01.481] namespace "namespace-1565663983-2740" deleted
I0813 02:40:01.482] namespace "namespace-1565663984-20054" deleted
I0813 02:40:01.482] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0813 02:40:01.482] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0813 02:40:01.483] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0813 02:40:01.483] has:namespace "my-namespace" deleted
W0813 02:40:01.584] E0813 02:40:01.526776   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:01.618] I0813 02:40:01.618026   53229 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0813 02:40:01.635] E0813 02:40:01.634343   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:01.677] I0813 02:40:01.676887   53229 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0813 02:40:01.719] I0813 02:40:01.718527   53229 controller_utils.go:1036] Caches are synced for garbage collector controller
W0813 02:40:01.740] E0813 02:40:01.739685   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:01.778] I0813 02:40:01.777342   53229 controller_utils.go:1036] Caches are synced for resource quota controller
I0813 02:40:01.879] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0813 02:40:01.879] namespace/other created
I0813 02:40:01.880] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0813 02:40:01.912] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:02.081] pod/valid-pod created
W0813 02:40:02.182] E0813 02:40:01.880883   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:02.282] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:40:02.289] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:40:02.375] Successful
I0813 02:40:02.376] message:error: a resource cannot be retrieved by name across all namespaces
I0813 02:40:02.376] has:a resource cannot be retrieved by name across all namespaces
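A resource name is only unique within a single namespace, so kubectl rejects combining a name with --all-namespaces; the test asserts that exact error text. Illustrative commands:

kubectl get pod valid-pod --namespace=other   # works: name resolved within one namespace
kubectl get pod valid-pod --all-namespaces    # error: a resource cannot be retrieved by name across all namespaces
kubectl get pods --all-namespaces             # fine: listing without a name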
I0813 02:40:02.478] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0813 02:40:02.561] pod "valid-pod" force deleted
I0813 02:40:02.662] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:02.742] namespace "other" deleted
W0813 02:40:02.843] E0813 02:40:02.529083   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:02.843] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0813 02:40:02.844] E0813 02:40:02.635966   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:02.844] E0813 02:40:02.741111   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:02.883] E0813 02:40:02.882129   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:03.531] E0813 02:40:03.530964   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:03.638] E0813 02:40:03.637807   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:03.743] E0813 02:40:03.743078   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:03.884] E0813 02:40:03.884167   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:04.533] E0813 02:40:04.532578   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:04.548] I0813 02:40:04.547939   53229 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565663984-20054
W0813 02:40:04.553] I0813 02:40:04.552466   53229 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565663984-20054
W0813 02:40:04.640] E0813 02:40:04.639704   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:04.745] E0813 02:40:04.745079   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:04.886] E0813 02:40:04.885896   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:05.535] E0813 02:40:05.534340   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:05.642] E0813 02:40:05.641573   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:05.747] E0813 02:40:05.746574   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:05.888] E0813 02:40:05.887812   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:06.536] E0813 02:40:06.536308   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:06.643] E0813 02:40:06.642864   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:06.749] E0813 02:40:06.748366   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:06.889] E0813 02:40:06.888688   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:07.539] E0813 02:40:07.538218   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:07.645] E0813 02:40:07.644842   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:07.755] E0813 02:40:07.754212   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:07.863] +++ exit code: 0
I0813 02:40:07.907] Recording: run_secrets_test
I0813 02:40:07.907] Running command: run_secrets_test
I0813 02:40:07.930] 
I0813 02:40:07.932] +++ Running case: test-cmd.run_secrets_test 
I0813 02:40:07.935] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0813 02:40:09.680] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0813 02:40:09.748] secret "test-secret" deleted
I0813 02:40:09.826] secret/test-secret created
I0813 02:40:09.911] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0813 02:40:09.989] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0813 02:40:10.061] secret "test-secret" deleted
W0813 02:40:10.161] E0813 02:40:07.890078   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.162] I0813 02:40:08.172157   70254 loader.go:375] Config loaded from file:  /tmp/tmp.fhqjrgDgsb/.kube/config
W0813 02:40:10.162] E0813 02:40:08.540309   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.163] E0813 02:40:08.645879   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.163] E0813 02:40:08.755575   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.163] E0813 02:40:08.891189   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.164] E0813 02:40:09.541618   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.164] E0813 02:40:09.647114   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.164] E0813 02:40:09.756708   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.165] E0813 02:40:09.892470   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:10.265] secret/secret-string-data created
I0813 02:40:10.301] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0813 02:40:10.378] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0813 02:40:10.459] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0813 02:40:10.531] secret "secret-string-data" deleted
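The secret-string-data checks show that stringData is write-only input: on creation the values are base64-encoded into .data (v1 becomes djE=, v2 becomes djI=) and .stringData itself is not persisted, hence <no value>. A minimal sketch of a manifest that reproduces this; the manifest shape is illustrative, only the names and values are taken from the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-string-data
  namespace: test-secrets
type: Opaque
stringData:
  k1: v1
  k2: v2
EOF
kubectl get secret secret-string-data --namespace=test-secrets -o go-template='{{.data}}'
# prints: map[k1:djE= k2:djI=]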
I0813 02:40:10.617] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:10.769] secret "test-secret" deleted
I0813 02:40:10.846] namespace "test-secrets" deleted
W0813 02:40:10.947] E0813 02:40:10.542757   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.947] E0813 02:40:10.648848   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.948] E0813 02:40:10.758091   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:10.948] E0813 02:40:10.893826   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:11.544] E0813 02:40:11.544205   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:11.651] E0813 02:40:11.650520   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:11.760] E0813 02:40:11.759455   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:11.895] E0813 02:40:11.895229   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:12.546] E0813 02:40:12.545569   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:12.652] E0813 02:40:12.651811   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:12.761] E0813 02:40:12.760692   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:12.897] E0813 02:40:12.896803   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:13.547] E0813 02:40:13.546913   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:13.653] E0813 02:40:13.653060   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:13.763] E0813 02:40:13.762425   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:13.898] E0813 02:40:13.897952   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:14.548] E0813 02:40:14.548223   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:14.654] E0813 02:40:14.654270   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:14.764] E0813 02:40:14.763755   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:14.899] E0813 02:40:14.899278   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:15.550] E0813 02:40:15.549550   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:15.656] E0813 02:40:15.655644   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:15.766] E0813 02:40:15.765928   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:15.901] E0813 02:40:15.901207   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:16.002] +++ exit code: 0
I0813 02:40:16.002] Recording: run_configmap_tests
I0813 02:40:16.002] Running command: run_configmap_tests
I0813 02:40:16.002] 
I0813 02:40:16.003] +++ Running case: test-cmd.run_configmap_tests 
I0813 02:40:16.003] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0813 02:40:17.042] configmap/test-binary-configmap created
I0813 02:40:17.127] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0813 02:40:17.206] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0813 02:40:17.427] configmap "test-configmap" deleted
I0813 02:40:17.503] configmap "test-binary-configmap" deleted
I0813 02:40:17.576] namespace "test-configmaps" deleted
W0813 02:40:17.677] E0813 02:40:16.551016   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:17.677] E0813 02:40:16.656382   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:17.677] E0813 02:40:16.767653   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 23 more identical reflector.go:125 "Failed to list *v1.PartialObjectMetadata" errors (E0813 02:40:16.902276 through E0813 02:40:22.664969) ...
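The repeated reflector.go:125 failures above come from a metadata-only informer whose target group/version the API server is not serving, so every List retry returns "the server could not find the requested resource". Below is a minimal, hypothetical Go sketch of how such an informer is wired up with client-go; the GroupVersionResource and kubeconfig path are made up for illustration and are not taken from this run.

    package main

    import (
        "time"

        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/metadata"
        "k8s.io/client-go/metadata/metadatainformer"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a rest.Config from a kubeconfig (path is illustrative only).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Metadata client: lists/watches objects as *v1.PartialObjectMetadata.
        client, err := metadata.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)
        // If this GroupVersionResource is not served by the API server, the
        // reflector logs "Failed to list *v1.PartialObjectMetadata: ..." and retries.
        gvr := schema.GroupVersionResource{Group: "example.io", Version: "v1", Resource: "widgets"}
        informer := factory.ForResource(gvr).Informer()
        _ = informer

        stop := make(chan struct{})
        factory.Start(stop)
        time.Sleep(5 * time.Second) // let the reflector attempt a few lists
        close(stop)
    }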
I0813 02:40:22.766] +++ exit code: 0
I0813 02:40:22.766] Recording: run_client_config_tests
I0813 02:40:22.766] Running command: run_client_config_tests
I0813 02:40:22.766] 
I0813 02:40:22.767] +++ Running case: test-cmd.run_client_config_tests 
I0813 02:40:22.767] +++ working dir: /go/src/k8s.io/kubernetes
I0813 02:40:22.767] +++ command: run_client_config_tests
I0813 02:40:22.767] +++ [0813 02:40:22] Creating namespace namespace-1565664022-31393
I0813 02:40:22.812] namespace/namespace-1565664022-31393 created
I0813 02:40:22.881] Context "test" modified.
I0813 02:40:22.888] +++ [0813 02:40:22] Testing client config
I0813 02:40:22.958] Successful
I0813 02:40:22.959] message:error: stat missing: no such file or directory
I0813 02:40:22.959] has:missing: no such file or directory
I0813 02:40:23.032] Successful
I0813 02:40:23.032] message:error: stat missing: no such file or directory
I0813 02:40:23.033] has:missing: no such file or directory
I0813 02:40:23.111] Successful
I0813 02:40:23.112] message:error: stat missing: no such file or directory
I0813 02:40:23.112] has:missing: no such file or directory
I0813 02:40:23.181] Successful
I0813 02:40:23.182] message:Error in configuration: context was not found for specified context: missing-context
I0813 02:40:23.182] has:context was not found for specified context: missing-context
I0813 02:40:23.260] Successful
I0813 02:40:23.261] message:error: no server found for cluster "missing-cluster"
I0813 02:40:23.261] has:no server found for cluster "missing-cluster"
I0813 02:40:23.332] Successful
I0813 02:40:23.332] message:error: auth info "missing-user" does not exist
I0813 02:40:23.333] has:auth info "missing-user" does not exist
W0813 02:40:23.433] E0813 02:40:22.776735   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:23.434] E0813 02:40:22.912419   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:23.534] Successful
I0813 02:40:23.535] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0813 02:40:23.535] has:error loading config file
I0813 02:40:23.540] Successful
I0813 02:40:23.540] message:error: stat missing-config: no such file or directory
I0813 02:40:23.540] has:no such file or directory
I0813 02:40:23.553] +++ exit code: 0
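The run_client_config_tests case above asserts on kubectl's kubeconfig-loading errors: missing file, missing context/cluster/user, and an unparseable config version. As a small, hypothetical illustration of the first class of failure, the sketch below uses client-go's clientcmd package; the file name mirrors the log output and is not part of the test itself.

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Loading a kubeconfig path that does not exist surfaces the underlying
        // stat error, which is what the "error: stat missing: no such file or
        // directory" assertions above match on.
        if _, err := clientcmd.LoadFromFile("missing"); err != nil {
            fmt.Println(err)
        }
    }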
I0813 02:40:23.589] Recording: run_service_accounts_tests
I0813 02:40:23.589] Running command: run_service_accounts_tests
I0813 02:40:23.610] 
I0813 02:40:23.613] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0813 02:40:23.946] namespace/test-service-accounts created
I0813 02:40:24.044] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0813 02:40:24.122] serviceaccount/test-service-account created
I0813 02:40:24.215] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0813 02:40:24.295] serviceaccount "test-service-account" deleted
I0813 02:40:24.385] namespace "test-service-accounts" deleted
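The "Successful get ... {{range.items}}{{.metadata.name}}:{{end}}" lines above are test-cmd assertions: kubectl get -o go-template renders a Go text/template against the API response and the shell script compares the rendered string. A self-contained sketch of those template mechanics follows; the object below is mocked and not fetched from the test cluster.

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Stand-in for a List response; only the fields the template touches exist.
        list := map[string]interface{}{
            "items": []interface{}{
                map[string]interface{}{"metadata": map[string]interface{}{"name": "test-service-account"}},
            },
        }
        // Same template shape used by the core.sh assertions above.
        tmpl := template.Must(template.New("assert").Parse("{{range .items}}{{.metadata.name}}:{{end}}"))
        if err := tmpl.Execute(os.Stdout, list); err != nil { // prints "test-service-account:"
            panic(err)
        }
    }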
W0813 02:40:24.486] E0813 02:40:23.561348   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 23 more identical reflector.go:125 "Failed to list *v1.PartialObjectMetadata" errors (E0813 02:40:23.666613 through E0813 02:40:28.922821) ...
I0813 02:40:29.490] +++ exit code: 0
I0813 02:40:29.526] Recording: run_job_tests
I0813 02:40:29.527] Running command: run_job_tests
I0813 02:40:29.550] 
I0813 02:40:29.553] +++ Running case: test-cmd.run_job_tests 
I0813 02:40:29.555] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0813 02:40:30.350] Labels:                        run=pi
I0813 02:40:30.350] Annotations:                   <none>
I0813 02:40:30.350] Schedule:                      59 23 31 2 *
I0813 02:40:30.350] Concurrency Policy:            Allow
I0813 02:40:30.350] Suspend:                       False
I0813 02:40:30.350] Successful Job History Limit:  3
I0813 02:40:30.350] Failed Job History Limit:      1
I0813 02:40:30.350] Starting Deadline Seconds:     <unset>
I0813 02:40:30.350] Selector:                      <unset>
I0813 02:40:30.351] Parallelism:                   <unset>
I0813 02:40:30.351] Completions:                   <unset>
I0813 02:40:30.351] Pod Template:
I0813 02:40:30.351]   Labels:  run=pi
... skipping 32 lines ...
I0813 02:40:30.887]                 run=pi
I0813 02:40:30.887] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0813 02:40:30.887] Controlled By:  CronJob/pi
I0813 02:40:30.887] Parallelism:    1
I0813 02:40:30.887] Completions:    1
I0813 02:40:30.887] Start Time:     Tue, 13 Aug 2019 02:40:30 +0000
I0813 02:40:30.887] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0813 02:40:30.887] Pod Template:
I0813 02:40:30.888]   Labels:  controller-uid=661b180a-4ae7-46fa-aa96-7883dcdc2b22
I0813 02:40:30.888]            job-name=test-job
I0813 02:40:30.888]            run=pi
I0813 02:40:30.888]   Containers:
I0813 02:40:30.888]    pi:
... skipping 14 lines ...
I0813 02:40:30.889] Events:
I0813 02:40:30.889]   Type    Reason            Age   From            Message
I0813 02:40:30.889]   ----    ------            ----  ----            -------
I0813 02:40:30.889]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-bvtsh
I0813 02:40:30.971] job.batch "test-job" deleted
I0813 02:40:31.062] cronjob.batch "pi" deleted
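The job describe output above ends with a SuccessfulCreate event from the job-controller, and the same event appears in the warning stream via event.go:255. For orientation only, here is a hypothetical sketch of the client-go EventRecorder call shape a controller uses to emit such an event; this is not the job controller's actual code, and the broadcaster has no sinks attached.

    package main

    import (
        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/record"
    )

    func main() {
        // A broadcaster with no sinks: enough to show the API shape.
        broadcaster := record.NewBroadcaster()
        recorder := broadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "job-controller"})

        job := &batchv1.Job{ObjectMeta: metav1.ObjectMeta{Name: "test-job", Namespace: "test-jobs"}}
        // Produces an event like: Normal SuccessfulCreate Created pod: test-job-bvtsh
        recorder.Eventf(job, corev1.EventTypeNormal, "SuccessfulCreate", "Created pod: %s", "test-job-bvtsh")
    }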
W0813 02:40:31.163] E0813 02:40:29.571743   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:31.163] E0813 02:40:29.675475   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:31.163] E0813 02:40:29.788073   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:31.164] E0813 02:40:29.924311   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:31.164] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0813 02:40:31.164] E0813 02:40:30.573493   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:31.165] I0813 02:40:30.622377   53229 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"661b180a-4ae7-46fa-aa96-7883dcdc2b22", APIVersion:"batch/v1", ResourceVersion:"1350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-bvtsh
W0813 02:40:31.165] E0813 02:40:30.677078   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:31.165] E0813 02:40:30.789253   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:31.165] E0813 02:40:30.925821   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:31.266] namespace "test-jobs" deleted
W0813 02:40:31.576] E0813 02:40:31.575396   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 19 more identical reflector.go:125 "Failed to list *v1.PartialObjectMetadata" errors (E0813 02:40:31.679311 through E0813 02:40:35.934201) ...
I0813 02:40:36.309] +++ exit code: 0
I0813 02:40:36.350] Recording: run_create_job_tests
I0813 02:40:36.350] Running command: run_create_job_tests
I0813 02:40:36.373] 
I0813 02:40:36.375] +++ Running case: test-cmd.run_create_job_tests 
I0813 02:40:36.378] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 27 lines ...
I0813 02:40:37.815] +++ [0813 02:40:37] Testing pod templates
I0813 02:40:37.910] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:38.081] podtemplate/nginx created
I0813 02:40:38.180] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0813 02:40:38.256] NAME    CONTAINERS   IMAGES   POD LABELS
I0813 02:40:38.257] nginx   nginx        nginx    name=nginx
W0813 02:40:38.358] E0813 02:40:36.583740   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:38.359] I0813 02:40:36.644359   53229 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565664036-19446", Name:"test-job", UID:"c96e96e7-facf-447c-a2ca-4924ea6541d5", APIVersion:"batch/v1", ResourceVersion:"1368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-xrtl2
W0813 02:40:38.359] E0813 02:40:36.687428   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:38.359] E0813 02:40:36.801939   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:38.360] I0813 02:40:36.913841   53229 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565664036-19446", Name:"test-job-pi", UID:"3cef667d-4e48-414a-b345-667cddf9b562", APIVersion:"batch/v1", ResourceVersion:"1375", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-z5hgg
W0813 02:40:38.360] E0813 02:40:36.935446   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:38.361] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0813 02:40:38.361] I0813 02:40:37.282941   53229 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565664036-19446", Name:"my-pi", UID:"65d3bfb4-6abe-4857-a643-17050c6435f2", APIVersion:"batch/v1", ResourceVersion:"1384", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-8sn2d
W0813 02:40:38.362] E0813 02:40:37.585171   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:38.362] E0813 02:40:37.688899   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:38.362] E0813 02:40:37.803675   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:38.363] E0813 02:40:37.936842   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:38.363] I0813 02:40:38.077726   49759 controller.go:606] quota admission added evaluator for: podtemplates
I0813 02:40:38.464] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0813 02:40:38.532] podtemplate "nginx" deleted
I0813 02:40:38.640] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:38.656] +++ exit code: 0
I0813 02:40:38.694] Recording: run_service_tests
... skipping 66 lines ...
I0813 02:40:39.629] Port:              <unset>  6379/TCP
I0813 02:40:39.629] TargetPort:        6379/TCP
I0813 02:40:39.630] Endpoints:         <none>
I0813 02:40:39.630] Session Affinity:  None
I0813 02:40:39.630] Events:            <none>
I0813 02:40:39.630]
W0813 02:40:39.730] E0813 02:40:38.586540   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:39.731] E0813 02:40:38.691004   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:39.731] E0813 02:40:38.805693   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:39.731] E0813 02:40:38.938342   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:39.732] E0813 02:40:39.587831   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:39.732] E0813 02:40:39.692515   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:39.808] E0813 02:40:39.807203   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:39.908] Successful describe services:
I0813 02:40:39.908] Name:              kubernetes
I0813 02:40:39.909] Namespace:         default
I0813 02:40:39.909] Labels:            component=apiserver
I0813 02:40:39.909]                    provider=kubernetes
I0813 02:40:39.909] Annotations:       <none>
... skipping 238 lines ...
I0813 02:40:40.786]   selector:
I0813 02:40:40.786]     role: padawan
I0813 02:40:40.786]   sessionAffinity: None
I0813 02:40:40.787]   type: ClusterIP
I0813 02:40:40.787] status:
I0813 02:40:40.787]   loadBalancer: {}
W0813 02:40:40.888] E0813 02:40:39.939668   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:40.889] E0813 02:40:40.589401   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:40.889] E0813 02:40:40.693567   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:40.890] E0813 02:40:40.808834   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:40.890] error: you must specify resources by --filename when --local is set.
W0813 02:40:40.891] Example resource specifications include:
W0813 02:40:40.891]    '-f rsrc.yaml'
W0813 02:40:40.891]    '--filename=rsrc.json'
W0813 02:40:40.942] E0813 02:40:40.941395   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:41.043] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0813 02:40:41.146] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0813 02:40:41.227] service "redis-master" deleted
I0813 02:40:41.330] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 02:40:41.427] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 02:40:41.595] (Bservice/redis-master created
I0813 02:40:41.699] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0813 02:40:41.793] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0813 02:40:41.961] service/service-v1-test created
W0813 02:40:42.062] E0813 02:40:41.590896   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:42.062] E0813 02:40:41.695003   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:42.063] E0813 02:40:41.810383   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:42.063] E0813 02:40:41.943239   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:42.163] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0813 02:40:42.250] service/service-v1-test replaced
I0813 02:40:42.366] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0813 02:40:42.460] service "redis-master" deleted
I0813 02:40:42.550] service "service-v1-test" deleted
I0813 02:40:42.650] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 02:40:42.742] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 02:40:42.906] service/redis-master created
W0813 02:40:43.007] E0813 02:40:42.592124   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:43.008] E0813 02:40:42.696290   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:43.008] E0813 02:40:42.812023   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:43.008] E0813 02:40:42.944982   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:43.109] service/redis-slave created
I0813 02:40:43.191] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0813 02:40:43.282] Successful
I0813 02:40:43.283] message:NAME           RSRC
I0813 02:40:43.283] kubernetes     144
I0813 02:40:43.283] redis-master   1418
... skipping 9 lines ...
I0813 02:40:43.957] core.sh:998: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
I0813 02:40:44.043] service "beep-boop" deleted
I0813 02:40:44.148] core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0813 02:40:44.238] core.sh:1009: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:44.354] service/testmetadata created
I0813 02:40:44.354] deployment.apps/testmetadata created
W0813 02:40:44.455] E0813 02:40:43.593817   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:44.455] E0813 02:40:43.697838   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:44.456] E0813 02:40:43.813415   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:44.456] E0813 02:40:43.946273   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:44.456] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0813 02:40:44.457] I0813 02:40:44.340161   53229 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"93927049-d2c5-4bbb-ac41-179e101b6072", APIVersion:"apps/v1", ResourceVersion:"1434", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0813 02:40:44.457] I0813 02:40:44.344497   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"7a3c6e11-e4ae-4f10-8d99-cd7648d7d867", APIVersion:"apps/v1", ResourceVersion:"1435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-kh89b
W0813 02:40:44.457] I0813 02:40:44.349332   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"7a3c6e11-e4ae-4f10-8d99-cd7648d7d867", APIVersion:"apps/v1", ResourceVersion:"1435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-plsn7
I0813 02:40:44.558] core.sh:1013: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I0813 02:40:44.562] core.sh:1014: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
... skipping 18 lines ...
I0813 02:40:45.619] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I0813 02:40:45.789] daemonset.apps/bind configured
I0813 02:40:45.895] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I0813 02:40:45.981] daemonset.apps/bind image updated
I0813 02:40:46.078] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I0813 02:40:46.170] daemonset.apps/bind env updated
W0813 02:40:46.271] E0813 02:40:44.595367   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:46.272] E0813 02:40:44.699398   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:46.272] E0813 02:40:44.814816   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:46.272] E0813 02:40:44.947533   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:46.272] I0813 02:40:45.517907   49759 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0813 02:40:46.272] I0813 02:40:45.530259   49759 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0813 02:40:46.273] E0813 02:40:45.596863   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:46.273] E0813 02:40:45.700809   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:46.273] E0813 02:40:45.816249   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:46.273] E0813 02:40:45.949011   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:46.374] apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
I0813 02:40:46.374] daemonset.apps/bind resource requirements updated
I0813 02:40:46.476] apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
I0813 02:40:46.565] daemonset.apps/bind restarted
I0813 02:40:46.666] apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
I0813 02:40:46.745] daemonset.apps "bind" deleted
... skipping 7 lines ...
I0813 02:40:46.857] +++ [0813 02:40:46] Creating namespace namespace-1565664046-11028
I0813 02:40:46.932] namespace/namespace-1565664046-11028 created
I0813 02:40:47.006] Context "test" modified.
I0813 02:40:47.015] +++ [0813 02:40:47] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I0813 02:40:47.113] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:47.283] daemonset.apps/bind created
W0813 02:40:47.384] E0813 02:40:46.598233   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:47.385] E0813 02:40:46.702324   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:47.385] E0813 02:40:46.817973   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:47.385] E0813 02:40:46.950387   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:47.486] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1565664046-11028"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0813 02:40:47.487]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I0813 02:40:47.497] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I0813 02:40:47.601] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0813 02:40:47.694] (Bapps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0813 02:40:47.865] daemonset.apps/bind configured
W0813 02:40:47.966] E0813 02:40:47.600055   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:47.967] E0813 02:40:47.703877   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:47.968] E0813 02:40:47.819206   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:47.968] E0813 02:40:47.952045   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0813 02:40:48.069] apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0813 02:40:48.077] apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0813 02:40:48.173] (Bapps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0813 02:40:48.268] (Bapps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1565664046-11028"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0813 02:40:48.270]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1565664046-11028"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0813 02:40:48.270]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
... skipping 12 lines ...
I0813 02:40:48.562] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0813 02:40:48.659] (Bapps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0813 02:40:48.763] daemonset.apps/bind rolled back
I0813 02:40:48.866] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0813 02:40:48.962] (Bapps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0813 02:40:49.066] Successful
I0813 02:40:49.066] message:error: unable to find specified revision 1000000 in history
I0813 02:40:49.066] has:unable to find specified revision
I0813 02:40:49.162] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0813 02:40:49.260] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0813 02:40:49.369] daemonset.apps/bind rolled back
I0813 02:40:49.468] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0813 02:40:49.557] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 13 lines ...
I0813 02:40:50.092] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:50.240] replicationcontroller/frontend created
I0813 02:40:50.330] replicationcontroller "frontend" deleted
I0813 02:40:50.427] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:50.522] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0813 02:40:50.682] replicationcontroller/frontend created
W0813 02:40:50.783] E0813 02:40:48.601500   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:50.783] E0813 02:40:48.705226   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:50.788] E0813 02:40:48.781303   53229 daemon_controller.go:302] namespace-1565664046-11028/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1565664046-11028", SelfLink:"/apis/apps/v1/namespaces/namespace-1565664046-11028/daemonsets/bind", UID:"8808b3e9-63ed-42b2-a13c-ba587e11e9bd", ResourceVersion:"1501", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701260847, loc:(*time.Location)(0x7206260)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1565664046-11028\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001ba2760), Fields:(*v1.Fields)(0xc001ba2780)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001ba27a0), Fields:(*v1.Fields)(0xc001ba27c0)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001ba2800), Fields:(*v1.Fields)(0xc001ba2840)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001ba2880), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00274a2c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f85da0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001ba28c0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000972318)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00274a35c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
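The daemon_controller.go error above ("Operation cannot be fulfilled on daemonsets.apps \"bind\": the object has been modified") is an optimistic-concurrency conflict: the controller's status update raced with the test's kubectl apply and lost on resourceVersion, so it retries. Purely as a generic, hypothetical sketch (not the controller's code), the usual client-go retry pattern for such conflicts looks like this; the simulated conflict below stands in for an error returned by UpdateStatus.

    package main

    import (
        "errors"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        attempts := 0
        // RetryOnConflict re-runs the closure with backoff while it returns a
        // Conflict error; any other result ends the loop.
        err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
            attempts++
            // A real controller would re-GET the DaemonSet here, reapply its
            // status change, and return the error from UpdateStatus.
            if attempts < 3 {
                return apierrors.NewConflict(
                    schema.GroupResource{Group: "apps", Resource: "daemonsets"},
                    "bind",
                    errors.New("the object has been modified"),
                )
            }
            return nil
        })
        fmt.Printf("succeeded after %d attempts, err=%v\n", attempts, err)
    }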
W0813 02:40:50.788] E0813 02:40:48.820674   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:50.788] E0813 02:40:48.953316   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:50.788] E0813 02:40:49.603225   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:50.789] E0813 02:40:49.707174   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:50.789] E0813 02:40:49.822178   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:50.789] E0813 02:40:49.954792   53229 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0813 02:40:50.789] I0813 02:40:50.247161   53229 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565664049-21479", Name:"frontend", UID:"426546fb-dd5f-4d28-922b-0d1731100536", APIVersion:"v1", ResourceVersion:"1513", FieldPath:"