PR: draveness: feat: return error when score is out of range
Result: FAILURE
Tests: 1 failed / 2470 succeeded
Started: 2019-08-14 08:00
Elapsed: 25m14s
Builder: gke-prow-ssd-pool-1a225945-d0kf
Refs: master:1f6cb3cb, 81015:d3d73aac
pod: 6f0e1278-be69-11e9-8926-5e2f786d826e
infra-commit: 89e6e9743
repo: k8s.io/kubernetes
repo-commit: 11883df95b82d4d4f0d050ea2a30aeda4fc3f232
repos: {u'k8s.io/kubernetes': u'master:1f6cb3cb9def97320a5412dcbea1661edd95c29e,81015:d3d73aac701cfa7226993f99c9bdddc9c52d09fe'}

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeProvision 13s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$
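A minimal sketch of reproducing this locally, assuming a clone of kubernetes/kubernetes and an etcd instance reachable on http://127.0.0.1:2379 (the storage backend endpoint the test uses, per the log below). The pull-ref fetch is the standard GitHub convention and is an assumption, not taken from this page; the CI job tested the PR head merged onto master@1f6cb3cb.

    # fetch and check out the PR head referenced above (81015:d3d73aac)
    git fetch https://github.com/kubernetes/kubernetes.git pull/81015/head
    git checkout FETCH_HEAD
    # etcd must be running on http://127.0.0.1:2379 before starting the integration test
    go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$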
=== RUN   TestVolumeProvision
W0814 08:23:50.047550  112882 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0814 08:23:50.047642  112882 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
I0814 08:23:50.048725  112882 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 08:23:50.048750  112882 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 08:23:50.048762  112882 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 08:23:50.048771  112882 master.go:234] Using reconciler: 
I0814 08:23:50.050352  112882 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.050446  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.050462  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.050502  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.050561  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.050933  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.051076  112882 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 08:23:50.051110  112882 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.051266  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.051284  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.051317  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.051384  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.051517  112882 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 08:23:50.051699  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.052090  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.052232  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.052449  112882 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 08:23:50.052511  112882 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 08:23:50.052709  112882 watch_cache.go:405] Replace watchCache (rev: 55753) 
I0814 08:23:50.053077  112882 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.053187  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.053198  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.053229  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.053234  112882 watch_cache.go:405] Replace watchCache (rev: 55753) 
I0814 08:23:50.053295  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.053903  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.053994  112882 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 08:23:50.054015  112882 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.054035  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.054075  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.054085  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.054114  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.054157  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.054164  112882 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 08:23:50.054812  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.054850  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.054903  112882 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 08:23:50.054942  112882 watch_cache.go:405] Replace watchCache (rev: 55753) 
I0814 08:23:50.054943  112882 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 08:23:50.055022  112882 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.055071  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.055081  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.055101  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.055129  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.055304  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.055369  112882 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 08:23:50.055414  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.055467  112882 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.055512  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.055524  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.055544  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.055515  112882 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 08:23:50.055667  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.055891  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.055976  112882 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 08:23:50.056062  112882 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.056107  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.056114  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.056133  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.056160  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.056187  112882 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 08:23:50.056348  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.056578  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.056756  112882 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 08:23:50.056920  112882 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.057023  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.057068  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.057117  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.057193  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.057265  112882 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 08:23:50.057529  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.057643  112882 watch_cache.go:405] Replace watchCache (rev: 55753) 
I0814 08:23:50.057646  112882 watch_cache.go:405] Replace watchCache (rev: 55753) 
I0814 08:23:50.057756  112882 watch_cache.go:405] Replace watchCache (rev: 55753) 
I0814 08:23:50.057993  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.058052  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.058447  112882 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 08:23:50.058622  112882 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 08:23:50.058734  112882 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.058849  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.058913  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.059015  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.059072  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.059286  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.059305  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.059446  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.059799  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.060766  112882 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 08:23:50.060832  112882 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 08:23:50.060938  112882 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.060988  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.060996  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.061017  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.061046  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.061240  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.061289  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.061420  112882 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 08:23:50.061493  112882 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 08:23:50.061650  112882 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.061825  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.062243  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.061716  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.062627  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.062689  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.062927  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.063209  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.063328  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.063454  112882 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 08:23:50.063498  112882 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 08:23:50.063565  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.063630  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.063641  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.063670  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.063712  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.064144  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.064255  112882 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 08:23:50.064351  112882 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.064400  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.064409  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.064431  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.064460  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.064484  112882 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 08:23:50.064652  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.064826  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.064882  112882 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 08:23:50.064966  112882 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.065004  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.065010  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.065029  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.065055  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.065073  112882 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 08:23:50.065237  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.065425  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.065483  112882 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 08:23:50.065505  112882 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.065564  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.065571  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.065613  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.065652  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.065680  112882 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 08:23:50.065876  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.066112  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.066195  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.066206  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.066237  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.066284  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.066340  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.066625  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.066769  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.066769  112882 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.066841  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.066851  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.066878  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.066915  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.066958  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.067174  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.067278  112882 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 08:23:50.067573  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.067639  112882 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 08:23:50.067873  112882 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.068078  112882 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.068131  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.068203  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.068324  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.068858  112882 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.069459  112882 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.069474  112882 watch_cache.go:405] Replace watchCache (rev: 55754) 
I0814 08:23:50.070087  112882 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.070550  112882 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.070839  112882 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.070918  112882 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.071038  112882 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.071329  112882 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.071775  112882 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.071904  112882 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.072363  112882 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.072571  112882 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.072994  112882 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.073198  112882 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.073728  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.073874  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.073949  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.074006  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.074098  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.074161  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.074249  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.074721  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.074889  112882 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.075318  112882 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.075804  112882 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.075993  112882 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.076412  112882 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.076889  112882 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.077121  112882 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.077641  112882 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.078228  112882 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.078694  112882 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.079166  112882 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.079328  112882 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.079413  112882 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 08:23:50.079432  112882 master.go:434] Enabling API group "authentication.k8s.io".
I0814 08:23:50.079444  112882 master.go:434] Enabling API group "authorization.k8s.io".
I0814 08:23:50.079550  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.079650  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.079665  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.079697  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.079741  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.080060  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.080184  112882 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:23:50.080301  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.080388  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.080397  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.080421  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.080473  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.080502  112882 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:23:50.080729  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.080958  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.081073  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.081117  112882 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:23:50.081240  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.081260  112882 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:23:50.081290  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.081298  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.081322  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.081437  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.081657  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.082531  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.083236  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.083320  112882 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:23:50.083339  112882 master.go:434] Enabling API group "autoscaling".
I0814 08:23:50.083445  112882 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.083488  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.083497  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.083521  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.083553  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.083577  112882 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:23:50.083764  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.083944  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.084047  112882 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 08:23:50.084168  112882 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.084238  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.084246  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.084269  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.084307  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.084341  112882 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 08:23:50.084493  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.084711  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.084791  112882 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 08:23:50.084805  112882 master.go:434] Enabling API group "batch".
I0814 08:23:50.084915  112882 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.084958  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.084964  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.084989  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.085029  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.085056  112882 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 08:23:50.085186  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.086656  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.086993  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.087036  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.087124  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.087158  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.087267  112882 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 08:23:50.087279  112882 master.go:434] Enabling API group "certificates.k8s.io".
I0814 08:23:50.087306  112882 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 08:23:50.087390  112882 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.087442  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.087449  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.087471  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.087501  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.087722  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.087804  112882 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 08:23:50.087915  112882 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.087976  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.087983  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.088005  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.088045  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.088076  112882 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 08:23:50.088230  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.088453  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.088468  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.088490  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.088840  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.088918  112882 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 08:23:50.088935  112882 master.go:434] Enabling API group "coordination.k8s.io".
I0814 08:23:50.088987  112882 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 08:23:50.089167  112882 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.089215  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.089222  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.089242  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.089281  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.089464  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.090107  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.090111  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.090565  112882 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 08:23:50.090585  112882 master.go:434] Enabling API group "extensions".
I0814 08:23:50.090718  112882 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 08:23:50.090735  112882 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.090789  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.090798  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.090826  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.090887  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.091260  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.091338  112882 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 08:23:50.091446  112882 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.091466  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.091479  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.091534  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.091545  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.091568  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.091635  112882 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 08:23:50.091637  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.091838  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.091926  112882 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 08:23:50.091935  112882 master.go:434] Enabling API group "networking.k8s.io".
I0814 08:23:50.091969  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.091961  112882 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.092255  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.092263  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.092287  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.092023  112882 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 08:23:50.092166  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.092364  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.092573  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.092669  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.092695  112882 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 08:23:50.092707  112882 master.go:434] Enabling API group "node.k8s.io".
I0814 08:23:50.092750  112882 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 08:23:50.092808  112882 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.092851  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.092857  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.092888  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.092915  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.093102  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.093156  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.093181  112882 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 08:23:50.093267  112882 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.093341  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.093348  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.093346  112882 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 08:23:50.093367  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.093410  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.093432  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.093635  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.093679  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.093722  112882 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 08:23:50.093734  112882 master.go:434] Enabling API group "policy".
I0814 08:23:50.093757  112882 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.093801  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.093807  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.093824  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.093824  112882 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 08:23:50.093864  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.094071  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.094231  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.094313  112882 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 08:23:50.094319  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.094359  112882 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 08:23:50.094411  112882 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.094457  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.094463  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.094507  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.094547  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.094681  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.094905  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.094956  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.095045  112882 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 08:23:50.095077  112882 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.095104  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.095128  112882 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 08:23:50.095135  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.095273  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.095308  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.095345  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.095343  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.095624  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.095734  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.095736  112882 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 08:23:50.095755  112882 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 08:23:50.095883  112882 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.095947  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.095956  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.096029  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.096069  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.096202  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.096327  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.096381  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.096413  112882 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 08:23:50.096434  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.096448  112882 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.096500  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.096505  112882 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 08:23:50.096511  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.096537  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.096663  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.097096  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.097141  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.097181  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.097194  112882 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 08:23:50.097223  112882 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 08:23:50.097313  112882 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.097389  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.097400  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.097425  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.097469  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.097753  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.097836  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.097913  112882 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 08:23:50.097951  112882 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.098035  112882 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 08:23:50.098051  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.098059  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.098104  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.098192  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.098340  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.098369  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.098445  112882 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 08:23:50.098569  112882 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.098681  112882 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 08:23:50.098743  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.098792  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.098828  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.098858  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.099088  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.099121  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.099204  112882 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 08:23:50.099224  112882 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 08:23:50.099351  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.099665  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.100052  112882 watch_cache.go:405] Replace watchCache (rev: 55755) 
I0814 08:23:50.100569  112882 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 08:23:50.101161  112882 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.101215  112882 watch_cache.go:405] Replace watchCache (rev: 55756) 
I0814 08:23:50.101281  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.101290  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.101314  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.101365  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.101638  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.101660  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.101773  112882 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 08:23:50.101900  112882 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.101976  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.101985  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.102029  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.102121  112882 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 08:23:50.102240  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.103053  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.103295  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.103076  112882 watch_cache.go:405] Replace watchCache (rev: 55756) 
I0814 08:23:50.103541  112882 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 08:23:50.103846  112882 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 08:23:50.104568  112882 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 08:23:50.104729  112882 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.104799  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.104813  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.104836  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.103713  112882 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 08:23:50.104975  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.105696  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.105803  112882 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 08:23:50.106033  112882 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.106381  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.106460  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.106551  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.106676  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.106758  112882 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 08:23:50.106986  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.107283  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.107779  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.107480  112882 watch_cache.go:405] Replace watchCache (rev: 55756) 
I0814 08:23:50.108062  112882 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 08:23:50.108165  112882 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.108293  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.108380  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.108464  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.107577  112882 watch_cache.go:405] Replace watchCache (rev: 55756) 
I0814 08:23:50.108846  112882 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 08:23:50.109093  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.110472  112882 watch_cache.go:405] Replace watchCache (rev: 55756) 
I0814 08:23:50.111085  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.111253  112882 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 08:23:50.111340  112882 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.111460  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.111549  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.111651  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.111861  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.111957  112882 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 08:23:50.112197  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.112528  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.112795  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.113036  112882 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 08:23:50.113384  112882 watch_cache.go:405] Replace watchCache (rev: 55756) 
I0814 08:23:50.113134  112882 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 08:23:50.114706  112882 watch_cache.go:405] Replace watchCache (rev: 55756) 
I0814 08:23:50.115392  112882 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.115483  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.115496  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.115560  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.115635  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.116304  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.116452  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.116591  112882 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 08:23:50.116759  112882 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.116864  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.116879  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.116909  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.116951  112882 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 08:23:50.117162  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.117992  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.118079  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.118112  112882 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 08:23:50.118364  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.118398  112882 master.go:434] Enabling API group "storage.k8s.io".
I0814 08:23:50.118532  112882 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 08:23:50.118682  112882 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.119318  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.119420  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.119174  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.119925  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.120268  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.121004  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.121214  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.121452  112882 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 08:23:50.121585  112882 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 08:23:50.122496  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.123200  112882 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.123363  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.123579  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.123707  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.123929  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.124471  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.124670  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.124856  112882 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 08:23:50.124956  112882 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 08:23:50.125475  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.126069  112882 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.126338  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.126768  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.127101  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.127422  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.127840  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.127984  112882 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 08:23:50.128114  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.128163  112882 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.128180  112882 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 08:23:50.128290  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.128309  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.128536  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.128587  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.128877  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.128907  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.129000  112882 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 08:23:50.129037  112882 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 08:23:50.129148  112882 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.129206  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.129216  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.129275  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.129339  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.129580  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.129630  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.129970  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.130032  112882 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 08:23:50.130132  112882 master.go:434] Enabling API group "apps".
I0814 08:23:50.130164  112882 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.130250  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.130262  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.130299  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.130050  112882 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 08:23:50.130452  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.130825  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.130913  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.131269  112882 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 08:23:50.131167  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.131327  112882 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 08:23:50.131388  112882 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.131853  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.131945  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.132032  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.132176  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.132314  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.132534  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.132684  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.132829  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.132864  112882 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 08:23:50.132887  112882 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.132941  112882 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 08:23:50.132953  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.132960  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.132979  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.133184  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.133492  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.133673  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.133815  112882 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 08:23:50.133846  112882 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 08:23:50.133853  112882 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.133913  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.133922  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.133950  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.134065  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.134410  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.134420  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.134433  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.134541  112882 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 08:23:50.134557  112882 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 08:23:50.134585  112882 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.134673  112882 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 08:23:50.134805  112882 client.go:354] parsed scheme: ""
I0814 08:23:50.134818  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:50.134855  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:50.134907  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.135125  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:50.135181  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:50.135247  112882 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 08:23:50.135263  112882 master.go:434] Enabling API group "events.k8s.io".
I0814 08:23:50.135345  112882 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 08:23:50.135503  112882 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.135660  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.135807  112882 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.136141  112882 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.136310  112882 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.136453  112882 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.136527  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.136636  112882 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.136877  112882 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.137557  112882 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.137882  112882 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.137964  112882 watch_cache.go:405] Replace watchCache (rev: 55757) 
I0814 08:23:50.138673  112882 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.139858  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.140102  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.141119  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.141402  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.142202  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.142498  112882 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.143209  112882 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.143588  112882 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.144324  112882 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.144587  112882 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:23:50.144709  112882 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 08:23:50.145321  112882 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.145541  112882 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.145867  112882 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.146690  112882 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.147417  112882 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.148169  112882 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.148485  112882 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.149240  112882 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.149807  112882 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.150041  112882 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.150732  112882 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:23:50.150785  112882 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 08:23:50.151495  112882 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.151776  112882 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.152268  112882 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.152926  112882 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.153297  112882 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.153928  112882 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.154475  112882 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.155129  112882 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.155510  112882 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.156038  112882 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.156615  112882 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:23:50.156664  112882 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 08:23:50.157155  112882 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.157583  112882 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:23:50.157645  112882 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 08:23:50.158070  112882 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.158498  112882 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.158820  112882 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.159245  112882 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.159650  112882 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.160073  112882 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.160560  112882 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:23:50.160634  112882 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 08:23:50.161337  112882 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.161992  112882 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.162232  112882 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.163018  112882 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.163221  112882 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.163424  112882 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.163998  112882 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.164216  112882 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.164472  112882 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.165298  112882 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.165509  112882 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.165730  112882 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:23:50.165789  112882 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 08:23:50.165804  112882 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 08:23:50.166466  112882 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.167145  112882 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.167863  112882 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.168401  112882 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.169078  112882 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"603b5aaa-a10d-4b21-ab4b-ec546bbcb214", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:23:50.171584  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.171624  112882 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 08:23:50.171636  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.171668  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.171679  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.171688  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.171748  112882 httplog.go:90] GET /healthz: (306.303µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
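The block above is the apiserver's verbose /healthz output while its post-start hooks are still running: each check is reported as [+] ok or [-] failed, and the overall request returns non-200 until every check passes. As a minimal sketch of that polling pattern (not code from this test; the address and the plain net/http client are placeholder assumptions), one could poll such an endpoint like this:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Hypothetical, locally reachable healthz endpoint; not taken from the log.
	const healthz = "http://127.0.0.1:8080/healthz"
	for attempt := 0; attempt < 20; attempt++ {
		resp, err := http.Get(healthz)
		if err != nil {
			fmt.Println("healthz request failed:", err)
			time.Sleep(100 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// Print the same [+]/[-] check listing the apiserver emits above.
		fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return // every check reported ok
		}
		time.Sleep(100 * time.Millisecond) // the log above polls on roughly this cadence
	}
}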
I0814 08:23:50.173164  112882 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.430026ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43118]
I0814 08:23:50.175474  112882 httplog.go:90] GET /api/v1/services: (916.873µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43118]
I0814 08:23:50.179309  112882 httplog.go:90] GET /api/v1/services: (1.208063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43118]
I0814 08:23:50.181424  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.181458  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.181470  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.181484  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.181496  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.181615  112882 httplog.go:90] GET /healthz: (272.339µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.182784  112882 httplog.go:90] GET /api/v1/services: (848.779µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.182784  112882 httplog.go:90] GET /api/v1/services: (898.832µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0814 08:23:50.182935  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.036293ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43118]
I0814 08:23:50.184670  112882 httplog.go:90] POST /api/v1/namespaces: (1.292829ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.185721  112882 httplog.go:90] GET /api/v1/namespaces/kube-public: (701.694µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.187058  112882 httplog.go:90] POST /api/v1/namespaces: (1.063641ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.187984  112882 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (638.067µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.189582  112882 httplog.go:90] POST /api/v1/namespaces: (1.194099ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
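The three request pairs above show the bootstrap of the kube-system, kube-public, and kube-node-lease namespaces: a GET that returns 404 followed by a POST that returns 201. A minimal sketch of that 404-then-create pattern, assuming a hypothetical insecure apiserver address and using plain net/http rather than the apiserver's internal clients:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureNamespace mirrors the GET 404 -> POST 201 sequence in the log:
// look the namespace up first, create it only when it is not found.
func ensureNamespace(server, name string) error {
	resp, err := http.Get(server + "/api/v1/namespaces/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // already present
	}
	payload := []byte(fmt.Sprintf(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":%q}}`, name))
	resp, err = http.Post(server+"/api/v1/namespaces", "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("creating namespace %s: unexpected status %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace("http://127.0.0.1:8080", ns); err != nil { // placeholder server
			fmt.Println(err)
		}
	}
}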
I0814 08:23:50.272641  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.272675  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.272694  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.272702  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.272707  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.272752  112882 httplog.go:90] GET /healthz: (262.393µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:50.282339  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.282376  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.282386  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.282392  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.282401  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.282441  112882 httplog.go:90] GET /healthz: (268.84µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.372449  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.372680  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.372774  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.372836  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.372906  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.373137  112882 httplog.go:90] GET /healthz: (797.839µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:50.382275  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.382486  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.382584  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.382709  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.382793  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.383042  112882 httplog.go:90] GET /healthz: (882.786µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.472584  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.472650  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.472670  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.472680  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.472693  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.472729  112882 httplog.go:90] GET /healthz: (307.403µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:50.482342  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.482383  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.482393  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.482400  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.482405  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.482452  112882 httplog.go:90] GET /healthz: (255.057µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.572638  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.572670  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.572678  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.572685  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.572690  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.572733  112882 httplog.go:90] GET /healthz: (239.448µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:50.582315  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.582492  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.582579  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.582695  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.582815  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.583069  112882 httplog.go:90] GET /healthz: (885.84µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.672559  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.672626  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.672639  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.672649  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.672656  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.672694  112882 httplog.go:90] GET /healthz: (292.792µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:50.682418  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.682456  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.682469  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.682478  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.682485  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.682526  112882 httplog.go:90] GET /healthz: (304.478µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.772684  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.772734  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.772747  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.772754  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.772759  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.772806  112882 httplog.go:90] GET /healthz: (366.643µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:50.782245  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.782288  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.782301  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.782312  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.782319  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.782358  112882 httplog.go:90] GET /healthz: (228.394µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.872495  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.872530  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.872539  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.872545  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.872551  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.872575  112882 httplog.go:90] GET /healthz: (205.402µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:50.882758  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.882786  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.882819  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.882827  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.882832  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.882869  112882 httplog.go:90] GET /healthz: (275.588µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:50.972493  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.972530  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.972539  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.972545  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.972551  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.972621  112882 httplog.go:90] GET /healthz: (254.363µs) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:50.982571  112882 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:23:50.982625  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:50.982637  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:50.982646  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:50.982654  112882 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:50.982691  112882 httplog.go:90] GET /healthz: (253.976µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:51.048251  112882 client.go:354] parsed scheme: ""
I0814 08:23:51.048294  112882 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:23:51.048350  112882 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:23:51.048415  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:51.048757  112882 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:23:51.048780  112882 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:23:51.073390  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.073418  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:51.073425  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:51.073431  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:51.073462  112882 httplog.go:90] GET /healthz: (1.175833ms) 0 [Go-http-client/1.1 127.0.0.1:43116]
I0814 08:23:51.083273  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.083302  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:51.083310  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:51.083315  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:51.083364  112882 httplog.go:90] GET /healthz: (1.169275ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:51.173313  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472349ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:51.173662  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.173690  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:51.173702  112882 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:23:51.173711  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:23:51.173743  112882 httplog.go:90] GET /healthz: (1.308066ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:51.173789  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.952814ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43122]
I0814 08:23:51.175235  112882 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (866.699µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43122]
I0814 08:23:51.175266  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.459731ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
I0814 08:23:51.176663  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.035308ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43122]
I0814 08:23:51.177811  112882 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (5.760104ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43128]
I0814 08:23:51.178143  112882 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.5092ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.179296  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.369957ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43122]
I0814 08:23:51.179409  112882 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.268093ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43128]
I0814 08:23:51.179568  112882 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 08:23:51.180814  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (857.393µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43122]
I0814 08:23:51.180989  112882 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.067894ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43128]
I0814 08:23:51.182365  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (822.398µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43128]
I0814 08:23:51.182964  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.182982  112882 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:23:51.182989  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.183011  112882 httplog.go:90] GET /healthz: (816.482µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.183075  112882 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.485508ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.183229  112882 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 08:23:51.183242  112882 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
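The priority-class bootstrap above follows a plain create-if-missing flow: GET the object, receive a 404, POST it, receive a 201, then log that all system priority classes exist. The sketch below reproduces that flow against the same REST paths using only net/http; the apiserver URL, the inline JSON body, and the absence of auth handling are assumptions for illustration — the real code path is storage_scheduling.go, not this helper.

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensurePriorityClass mirrors the GET→404→POST→201 flow from the log: it
// probes the object and only creates it when the GET reports 404.
func ensurePriorityClass(apiserver, name string, value int64) error {
	base := apiserver + "/apis/scheduling.k8s.io/v1beta1/priorityclasses"

	resp, err := http.Get(base + "/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // already exists
	}
	if resp.StatusCode != http.StatusNotFound {
		return fmt.Errorf("unexpected status %d probing %s", resp.StatusCode, name)
	}

	body := fmt.Sprintf(`{"apiVersion":"scheduling.k8s.io/v1beta1","kind":"PriorityClass","metadata":{"name":%q},"value":%d}`, name, value)
	resp, err = http.Post(base, "application/json", bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create %s: unexpected status %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	// Values match the log: system-node-critical=2000001000,
	// system-cluster-critical=2000000000. The URL is an assumption.
	for name, value := range map[string]int64{
		"system-node-critical":    2000001000,
		"system-cluster-critical": 2000000000,
	} {
		if err := ensurePriorityClass("http://127.0.0.1:8080", name, value); err != nil {
			fmt.Println("bootstrap failed:", err)
		}
	}
}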
I0814 08:23:51.183673  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (990.612µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43128]
I0814 08:23:51.184725  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (691.996µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.185842  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (779.024µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.187011  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (772.411µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.188718  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.197107ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.188890  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 08:23:51.189674  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (593.05µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.191133  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.042863ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.191316  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 08:23:51.192342  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (796.777µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.194109  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375571ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.194304  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 08:23:51.195240  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (789.245µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.196679  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.029206ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.196834  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 08:23:51.197716  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (668.228µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.200555  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.451339ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.200805  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 08:23:51.201954  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (876.211µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.203853  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.641489ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.204049  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 08:23:51.205057  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (822.875µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.206911  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465023ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.207073  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 08:23:51.208205  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (970.048µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.210071  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.480647ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.210302  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 08:23:51.211322  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (794.5µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.213963  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.082564ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.214271  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 08:23:51.215353  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (900.96µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.217340  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.529535ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.217765  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 08:23:51.219858  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.828002ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.221622  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449587ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.221767  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 08:23:51.222766  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (832.458µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.225135  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.875215ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.225346  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 08:23:51.226441  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (910.652µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.228859  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.81481ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.229067  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 08:23:51.230157  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (815.475µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.231928  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.418637ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.232214  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 08:23:51.233639  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.049531ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.235484  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.263007ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.235727  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 08:23:51.236715  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (811.941µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.238554  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.348408ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.238748  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 08:23:51.239730  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (789.463µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.241679  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523328ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.241977  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 08:23:51.242917  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (724.564µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.244555  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.211492ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.244841  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 08:23:51.245832  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (803.012µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.247951  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.709856ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.248200  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 08:23:51.249429  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (867.513µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.251210  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.394037ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.251379  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 08:23:51.252317  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (723.916µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.254215  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.55545ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.254701  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 08:23:51.255707  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (875.473µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.257358  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.195971ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.258031  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 08:23:51.259302  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.08386ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.261189  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.397691ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.261464  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 08:23:51.262583  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (885.068µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.264196  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.20418ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.264508  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 08:23:51.265964  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.205472ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.267707  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.325573ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.267987  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 08:23:51.268981  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (787.868µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.271171  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.741453ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.271434  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 08:23:51.272408  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (714.463µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.273416  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.273441  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.273476  112882 httplog.go:90] GET /healthz: (1.268076ms) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:51.274547  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.281296ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.274747  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 08:23:51.275723  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (720.235µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.277827  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.620498ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.278106  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 08:23:51.279013  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (730.485µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.281288  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.927758ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.281434  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 08:23:51.282367  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (790.944µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.283569  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.284492  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.284804  112882 httplog.go:90] GET /healthz: (2.756142ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.284111  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.386478ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.285422  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 08:23:51.286425  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (717.472µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.288096  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.230782ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.288561  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 08:23:51.289421  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (623.654µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.291269  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.48116ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.291636  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 08:23:51.292537  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (702.101µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.294149  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167578ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.294369  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 08:23:51.295313  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (669.565µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.296724  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.083906ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.297063  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 08:23:51.297966  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (747.701µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.300430  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.561944ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.300700  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 08:23:51.301860  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (771.981µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.303240  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (943.608µs) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.303494  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 08:23:51.304715  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (845.818µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.306472  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.246941ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.306856  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 08:23:51.308023  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (809.532µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.310079  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.189478ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.310405  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 08:23:51.311394  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (724.864µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.314473  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.681542ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.315013  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 08:23:51.316076  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (825.053µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.317497  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.051972ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.317739  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 08:23:51.318980  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (855.203µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.320741  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.445837ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.321051  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 08:23:51.322175  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (836.066µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.324071  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.5202ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.324300  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 08:23:51.325303  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (738.866µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.326851  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.111856ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.327153  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 08:23:51.328796  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.446418ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.330636  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.354337ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.330849  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 08:23:51.332444  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.310964ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.334288  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314251ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.334566  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 08:23:51.335663  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (825.629µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.338266  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.985281ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.338618  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 08:23:51.339655  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (866.195µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.341181  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.184584ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.341327  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 08:23:51.342134  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (655.086µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.343833  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.38282ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.344072  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 08:23:51.344951  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (740.383µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.346728  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.40711ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.347092  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 08:23:51.348285  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (722.375µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.352838  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.245141ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.353175  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 08:23:51.373131  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.373165  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.373203  112882 httplog.go:90] GET /healthz: (950.181µs) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:51.373432  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.663719ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.383272  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.383328  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.383386  112882 httplog.go:90] GET /healthz: (1.106961ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.393697  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.832952ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.394184  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 08:23:51.413062  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.286672ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.434325  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.498438ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.434575  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 08:23:51.452873  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.084303ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.473771  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.473806  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.473860  112882 httplog.go:90] GET /healthz: (1.563533ms) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:51.474258  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.455924ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.474639  112882 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 08:23:51.483215  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.483244  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.483281  112882 httplog.go:90] GET /healthz: (1.002541ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.493090  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.317786ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.514202  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.375773ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.514809  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 08:23:51.533181  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.41815ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.553775  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.871066ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.554064  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 08:23:51.573163  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.407934ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.573162  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.573254  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.573276  112882 httplog.go:90] GET /healthz: (972.139µs) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:51.583294  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.583325  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.583578  112882 httplog.go:90] GET /healthz: (1.309771ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.593637  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.832726ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.593844  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 08:23:51.613159  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.368684ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.634215  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.373339ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.634618  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 08:23:51.653324  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.398651ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.673522  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.750673ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.673977  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.674229  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.674039  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 08:23:51.674638  112882 httplog.go:90] GET /healthz: (2.375208ms) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:51.682788  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.682813  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.682866  112882 httplog.go:90] GET /healthz: (723.184µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.692741  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (974.41µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.714018  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.258596ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.714583  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 08:23:51.733475  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.554333ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.753970  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.155905ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.754219  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 08:23:51.773401  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.543905ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.774218  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.774374  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.774639  112882 httplog.go:90] GET /healthz: (2.375671ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:51.783294  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.783374  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.783412  112882 httplog.go:90] GET /healthz: (1.221167ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.793614  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.86007ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.794054  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 08:23:51.813648  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.490171ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.833803  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.031992ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.834230  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
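The second half of the log repeats the same reconciliation for the default ClusterRoleBindings, including system:volume-scheduler, which the volume scheduling tests rely on. For contrast with the raw REST sketch earlier, here is a hedged typed-client variant of "create and tolerate AlreadyExists"; the kubeconfig path and the system:kube-scheduler subject are assumptions, and the no-context Create signature matches client-go of this era (newer releases take a context and CreateOptions).

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureVolumeSchedulerBinding creates the system:volume-scheduler binding
// and treats "already exists" as success, mirroring the idempotent
// bootstrap pass visible in the storage_rbac.go lines above.
func ensureVolumeSchedulerBinding(cs kubernetes.Interface) error {
	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "system:volume-scheduler"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "system:volume-scheduler",
		},
		// Subject is an assumption for the sketch: the default policy binds
		// this role to the kube-scheduler user.
		Subjects: []rbacv1.Subject{{
			APIGroup: rbacv1.GroupName,
			Kind:     rbacv1.UserKind,
			Name:     "system:kube-scheduler",
		}},
	}
	_, err := cs.RbacV1().ClusterRoleBindings().Create(binding)
	if apierrors.IsAlreadyExists(err) {
		return nil
	}
	return err
}

func main() {
	// Hypothetical kubeconfig path; any working client configuration will do.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/test.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := ensureVolumeSchedulerBinding(cs); err != nil {
		fmt.Println("bootstrap failed:", err)
	}
}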
I0814 08:23:51.853704  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.859521ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.873739  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.873902  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.874067  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.205591ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:51.874124  112882 httplog.go:90] GET /healthz: (1.203577ms) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:51.874333  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 08:23:51.883723  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.883919  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.884075  112882 httplog.go:90] GET /healthz: (1.832465ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.893182  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.37007ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.913804  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.017397ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.914284  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 08:23:51.933135  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.339725ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.954235  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.348954ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.954531  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 08:23:51.973479  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.973520  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.973548  112882 httplog.go:90] GET /healthz: (1.222449ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:51.973480  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.521894ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.983395  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:51.983441  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:51.983515  112882 httplog.go:90] GET /healthz: (1.285119ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.994209  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.011073ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:51.994487  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 08:23:52.013300  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.393069ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.034249  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.421971ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.034493  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 08:23:52.052992  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.242188ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.073619  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.073806  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.073916  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.114621ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.073922  112882 httplog.go:90] GET /healthz: (1.082597ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:52.074435  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 08:23:52.083241  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.083284  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.083318  112882 httplog.go:90] GET /healthz: (1.081291ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.093160  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.35425ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.113963  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.107314ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.114201  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 08:23:52.133106  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.386532ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.154277  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.494691ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.154567  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 08:23:52.173474  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.695582ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.173664  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.173690  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.173731  112882 httplog.go:90] GET /healthz: (1.452856ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:52.183643  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.183683  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.183754  112882 httplog.go:90] GET /healthz: (1.492252ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.194338  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.396862ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.194650  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 08:23:52.213155  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.378103ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.234086  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.231503ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.234445  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 08:23:52.253495  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.607337ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.273355  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.273418  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.273453  112882 httplog.go:90] GET /healthz: (1.159928ms) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:52.274260  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.405983ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.274472  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 08:23:52.283476  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.283511  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.283544  112882 httplog.go:90] GET /healthz: (1.30506ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.293427  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.572641ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.314304  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.334995ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.314579  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 08:23:52.333469  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.516089ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.353879  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.058105ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.354367  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 08:23:52.372845  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.081283ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.373163  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.373201  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.373244  112882 httplog.go:90] GET /healthz: (993.471µs) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:52.383414  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.383575  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.383751  112882 httplog.go:90] GET /healthz: (1.560133ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.393928  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.185958ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.394378  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 08:23:52.413362  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.478809ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.434482  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.642631ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.434853  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 08:23:52.453233  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.400365ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.473636  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.776071ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.473645  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.473675  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.473720  112882 httplog.go:90] GET /healthz: (1.44967ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:52.473941  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 08:23:52.483195  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.483636  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.483819  112882 httplog.go:90] GET /healthz: (1.627336ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.493098  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.28903ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.514098  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.277883ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.514415  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 08:23:52.533100  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.299019ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.554074  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.150047ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.554310  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 08:23:52.573445  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.600737ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.573849  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.573873  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.573925  112882 httplog.go:90] GET /healthz: (1.561892ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:52.583486  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.583534  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.583621  112882 httplog.go:90] GET /healthz: (1.29823ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.593725  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.88289ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.594131  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 08:23:52.613457  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.397163ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.633939  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.125076ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.634251  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 08:23:52.653544  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.52384ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.673267  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.673296  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.673329  112882 httplog.go:90] GET /healthz: (1.028532ms) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:52.673819  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.016553ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.674371  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 08:23:52.683314  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.683493  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.683722  112882 httplog.go:90] GET /healthz: (1.498126ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.693463  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.418405ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.714336  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.405744ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.714797  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 08:23:52.733447  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.443168ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.754135  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.291548ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.754528  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 08:23:52.773127  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.213243ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:52.773315  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.773363  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.773400  112882 httplog.go:90] GET /healthz: (1.045651ms) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:52.783492  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.783541  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.783635  112882 httplog.go:90] GET /healthz: (1.299574ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.794745  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.914958ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.795229  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 08:23:52.813465  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.488926ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.833944  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.038897ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.834415  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 08:23:52.853422  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.562979ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.873531  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.873559  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.873587  112882 httplog.go:90] GET /healthz: (1.26983ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:52.873815  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.963717ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.874094  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 08:23:52.883584  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.883630  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.883664  112882 httplog.go:90] GET /healthz: (1.503255ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.893110  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.308872ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.914313  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.472865ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.914623  112882 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
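The alternating GET (404) and POST (201) requests above are the RBAC bootstrapper's get-or-create reconciliation: each default clusterrolebinding is looked up and created only if it is missing. A minimal get-or-create sketch using client-go (a newer, context-taking client API; the binding contents and kubeconfig path are illustrative, not the apiserver's own bootstrap code):

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureClusterRoleBinding creates the binding only if it does not exist yet,
// mirroring the GET (404) followed by POST (201) pattern in the log.
func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	return err
}

func main() {
	// Illustrative kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "example:binding"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "view",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "example",
			Namespace: "kube-system",
		}},
	}
	if err := ensureClusterRoleBinding(context.Background(), cs, crb); err != nil {
		fmt.Println(err)
	}
}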
I0814 08:23:52.933278  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.407026ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.935534  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.550081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.954984  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.416335ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.955250  112882 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 08:23:52.973174  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.973231  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.973277  112882 httplog.go:90] GET /healthz: (936.441µs) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:52.973517  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.723745ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.975162  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.163684ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.983431  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:52.983460  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:52.983506  112882 httplog.go:90] GET /healthz: (1.112601ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.994303  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.327892ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:52.994780  112882 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 08:23:53.013049  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.248638ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.015125  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.299489ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.034002  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.141986ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.034456  112882 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 08:23:53.053437  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.550087ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.056282  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.96865ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.073301  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.073336  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.073379  112882 httplog.go:90] GET /healthz: (1.028666ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:53.074535  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.661977ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.074747  112882 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 08:23:53.083501  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.083685  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.083806  112882 httplog.go:90] GET /healthz: (1.596071ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.093360  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.622631ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.095769  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.807691ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.114685  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.789734ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.116514  112882 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 08:23:53.133386  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.528226ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.135778  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.631817ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.154074  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.153868ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.154499  112882 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 08:23:53.173387  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.173423  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.173512  112882 httplog.go:90] GET /healthz: (1.313152ms) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:53.173537  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.672568ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.176475  112882 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.631736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.183638  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.183667  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.183697  112882 httplog.go:90] GET /healthz: (1.37927ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.193789  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.994973ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.194038  112882 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 08:23:53.213334  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.426071ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.215186  112882 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.308564ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.234134  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.255262ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.234570  112882 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 08:23:53.253340  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.468102ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.255129  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.377165ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.273202  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.273238  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.273288  112882 httplog.go:90] GET /healthz: (974.966µs) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:53.274013  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.163952ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.274220  112882 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 08:23:53.283335  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.283636  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.283963  112882 httplog.go:90] GET /healthz: (1.648078ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.292903  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.216783ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.294743  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.241677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.313863  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.060972ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.314220  112882 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 08:23:53.333298  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.455513ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.335028  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.230568ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.353886  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.960939ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.354116  112882 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 08:23:53.372750  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (965.518µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.373043  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.373074  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.373114  112882 httplog.go:90] GET /healthz: (905.699µs) 0 [Go-http-client/1.1 127.0.0.1:43130]
I0814 08:23:53.374906  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.204162ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.383361  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.383417  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.383512  112882 httplog.go:90] GET /healthz: (1.265961ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.393910  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.147638ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.394145  112882 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 08:23:53.413367  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.423389ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.415515  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.605913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.434137  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.253023ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.434382  112882 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 08:23:53.453728  112882 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.72042ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.455843  112882 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.536361ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.474445  112882 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:23:53.474471  112882 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:23:53.474523  112882 httplog.go:90] GET /healthz: (2.12151ms) 0 [Go-http-client/1.1 127.0.0.1:43132]
I0814 08:23:53.474885  112882 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.361606ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.475293  112882 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 08:23:53.483696  112882 httplog.go:90] GET /healthz: (1.500787ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.484957  112882 httplog.go:90] GET /api/v1/namespaces/default: (941.734µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.487361  112882 httplog.go:90] POST /api/v1/namespaces: (1.622101ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.489072  112882 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.409517ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.492771  112882 httplog.go:90] POST /api/v1/namespaces/default/services: (3.324731ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.494472  112882 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.111659ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.496622  112882 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.724607ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.573682  112882 httplog.go:90] GET /healthz: (1.238381ms) 200 [Go-http-client/1.1 127.0.0.1:43130]
W0814 08:23:53.574369  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574392  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574413  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574425  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574433  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574439  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574463  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574473  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574482  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574543  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:53.574561  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 08:23:53.574587  112882 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 08:23:53.574654  112882 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0814 08:23:53.575039  112882 reflector.go:122] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575072  112882 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575239  112882 reflector.go:122] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575255  112882 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575490  112882 reflector.go:122] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575510  112882 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575528  112882 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575538  112882 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575848  112882 reflector.go:122] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575869  112882 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575919  112882 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.575935  112882 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.576024  112882 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (660.772µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.576234  112882 reflector.go:122] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.576249  112882 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.576512  112882 reflector.go:122] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.576536  112882 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.576759  112882 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (512.38µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.576790  112882 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (385.128µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43188]
I0814 08:23:53.576883  112882 get.go:250] Starting watch for /api/v1/nodes, rv=55754 labels= fields= timeout=7m34s
I0814 08:23:53.576919  112882 reflector.go:122] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.576936  112882 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.576936  112882 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (548.277µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43184]
I0814 08:23:53.577234  112882 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (372.35µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:23:53.577292  112882 reflector.go:122] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.577307  112882 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.577710  112882 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (510.22µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43186]
I0814 08:23:53.577757  112882 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=55755 labels= fields= timeout=5m52s
I0814 08:23:53.577803  112882 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (484.037µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:23:53.578114  112882 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (1.018769ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43190]
I0814 08:23:53.578128  112882 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=55757 labels= fields= timeout=8m40s
I0814 08:23:53.578385  112882 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (331.074µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43184]
I0814 08:23:53.578726  112882 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=55754 labels= fields= timeout=9m55s
I0814 08:23:53.578757  112882 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=55757 labels= fields= timeout=5m13s
I0814 08:23:53.578734  112882 get.go:250] Starting watch for /api/v1/services, rv=55969 labels= fields= timeout=8m13s
I0814 08:23:53.579064  112882 get.go:250] Starting watch for /api/v1/pods, rv=55754 labels= fields= timeout=5m43s
I0814 08:23:53.579105  112882 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=55756 labels= fields= timeout=5m24s
I0814 08:23:53.579131  112882 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=55753 labels= fields= timeout=6m57s
I0814 08:23:53.580014  112882 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (443.451µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0814 08:23:53.580212  112882 reflector.go:122] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.580247  112882 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 08:23:53.580839  112882 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=55754 labels= fields= timeout=5m11s
I0814 08:23:53.581294  112882 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (458.976µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0814 08:23:53.582079  112882 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=55757 labels= fields= timeout=6m53s
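The reflector lines above show the scheduler's shared informers issuing an initial LIST for each resource and then opening a WATCH; the "caches populated" lines that follow are the wait for those local caches to sync before scheduling starts. A minimal sketch of the same pattern with client-go shared informers (clientset construction is illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Resync period 0 matches the "(0s)" shown in the reflector log lines.
	factory := informers.NewSharedInformerFactory(cs, 0)
	podInformer := factory.Core().V1().Pods().Informer()
	pvcInformer := factory.Core().V1().PersistentVolumeClaims().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)

	// Start kicks off one reflector (LIST + WATCH) per informer.
	factory.Start(stopCh)

	// Block until the initial LISTs have populated the local caches.
	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced, pvcInformer.HasSynced) {
		fmt.Println("caches did not sync")
		return
	}
	fmt.Println("caches populated")
}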
I0814 08:23:53.675324  112882 shared_informer.go:211] caches populated
I0814 08:23:53.775707  112882 shared_informer.go:211] caches populated
I0814 08:23:53.875933  112882 shared_informer.go:211] caches populated
I0814 08:23:53.976341  112882 shared_informer.go:211] caches populated
I0814 08:23:54.076678  112882 shared_informer.go:211] caches populated
I0814 08:23:54.177051  112882 shared_informer.go:211] caches populated
I0814 08:23:54.277279  112882 shared_informer.go:211] caches populated
I0814 08:23:54.377521  112882 shared_informer.go:211] caches populated
I0814 08:23:54.477773  112882 shared_informer.go:211] caches populated
I0814 08:23:54.578035  112882 shared_informer.go:211] caches populated
I0814 08:23:54.678277  112882 shared_informer.go:211] caches populated
I0814 08:23:54.778930  112882 shared_informer.go:211] caches populated
I0814 08:23:54.779262  112882 plugins.go:629] Loaded volume plugin "kubernetes.io/mock-provisioner"
W0814 08:23:54.779297  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:54.779326  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:54.779342  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:54.779360  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:23:54.779380  112882 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 08:23:54.779460  112882 pv_controller_base.go:282] Starting persistent volume controller
I0814 08:23:54.779487  112882 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
I0814 08:23:54.779664  112882 reflector.go:122] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.779681  112882 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.780229  112882 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.780241  112882 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.780376  112882 reflector.go:122] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.780396  112882 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.780876  112882 reflector.go:122] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.780893  112882 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.781158  112882 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (831.31µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0814 08:23:54.781256  112882 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (457.926µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43218]
I0814 08:23:54.782170  112882 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (269.642µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0814 08:23:54.782252  112882 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=55753 labels= fields= timeout=9m4s
I0814 08:23:54.782340  112882 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=55754 labels= fields= timeout=5m36s
I0814 08:23:54.782587  112882 reflector.go:122] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.782648  112882 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 08:23:54.782897  112882 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=55757 labels= fields= timeout=7m20s
I0814 08:23:54.783328  112882 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (437.444µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0814 08:23:54.783930  112882 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (2.490588ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43222]
I0814 08:23:54.784558  112882 get.go:250] Starting watch for /api/v1/pods, rv=55754 labels= fields= timeout=9m18s
I0814 08:23:54.784786  112882 get.go:250] Starting watch for /api/v1/nodes, rv=55754 labels= fields= timeout=9m33s
I0814 08:23:54.879649  112882 shared_informer.go:211] caches populated
I0814 08:23:54.879681  112882 shared_informer.go:211] caches populated
I0814 08:23:54.879688  112882 controller_utils.go:1036] Caches are synced for persistent volume controller
I0814 08:23:54.879800  112882 pv_controller_base.go:158] controller initialized
I0814 08:23:54.879897  112882 pv_controller_base.go:419] resyncing PV controller
I0814 08:23:54.980010  112882 shared_informer.go:211] caches populated
I0814 08:23:55.080227  112882 shared_informer.go:211] caches populated
I0814 08:23:55.180581  112882 shared_informer.go:211] caches populated
I0814 08:23:55.280876  112882 shared_informer.go:211] caches populated
I0814 08:23:55.284943  112882 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I0814 08:23:55.285054  112882 httplog.go:90] POST /api/v1/nodes: (3.067393ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.287677  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.920369ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.289752  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.370805ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.290180  112882 volume_binding_test.go:751] Running test topology unsatisfied
I0814 08:23:55.292178  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.795716ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.294623  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.608392ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.296265  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.286704ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.298429  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (1.65085ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.298784  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch", version 56022
I0814 08:23:55.298825  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.298866  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch]: no volume found
I0814 08:23:55.298899  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch] status: set phase Pending
I0814 08:23:55.298917  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch] status: phase Pending already set
I0814 08:23:55.298948  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-topomismatch", UID:"80577e1d-906b-4dce-9343-ba58cb5f7c5e", APIVersion:"v1", ResourceVersion:"56022", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0814 08:23:55.300924  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.677699ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
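The "WaitForFirstConsumer" event above is emitted because the claim's StorageClass uses the WaitForFirstConsumer volume binding mode, so binding and provisioning are deferred until a pod that uses the claim is scheduled. A minimal sketch of the two objects involved, built with the k8s.io/api types contemporary with this log (object names and the provisioner string are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	waitMode := storagev1.VolumeBindingWaitForFirstConsumer

	// StorageClass with delayed binding: provisioning waits for a consumer pod.
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-sc"},
		Provisioner:       "example.com/mock-provisioner", // illustrative provisioner name
		VolumeBindingMode: &waitMode,
	}

	scName := sc.Name
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-example", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &scName,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}

	// Until a pod referencing this claim is scheduled, the claim stays Pending
	// and the controller records the WaitForFirstConsumer event seen in the log.
	fmt.Println(sc.Name, *pvc.Spec.StorageClassName)
}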
I0814 08:23:55.301509  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (1.620568ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43232]
I0814 08:23:55.302233  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch
I0814 08:23:55.302286  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch
I0814 08:23:55.302502  112882 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch", PVC "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch" on node "node-1"
I0814 08:23:55.302538  112882 scheduler_binder.go:723] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch"
I0814 08:23:55.302568  112882 factory.go:550] Unable to schedule volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I0814 08:23:55.302608  112882 factory.go:624] Updating pod condition for volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I0814 08:23:55.304516  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-topomismatch: (1.248365ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
E0814 08:23:55.304996  112882 factory.go:590] pod is already present in the activeQ
I0814 08:23:55.305034  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-topomismatch/status: (2.184087ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43232]
I0814 08:23:55.305677  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.97659ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.306682  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-topomismatch: (1.022973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43232]
I0814 08:23:55.307152  112882 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch on any node.
I0814 08:23:55.307351  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch
I0814 08:23:55.307370  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch
I0814 08:23:55.307588  112882 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch", PVC "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch" on node "node-1"
I0814 08:23:55.307710  112882 scheduler_binder.go:723] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch"
I0814 08:23:55.307776  112882 factory.go:550] Unable to schedule volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I0814 08:23:55.307821  112882 factory.go:624] Updating pod condition for volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I0814 08:23:55.309452  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-topomismatch: (1.399524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.310808  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-topomismatch: (2.765038ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.310933  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (2.380719ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43236]
I0814 08:23:55.311072  112882 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch on any node.
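"Node "node-1" cannot satisfy provisioning topology requirements" means node-1's labels match none of the StorageClass's allowedTopologies terms, so delayed provisioning cannot place a volume there and the pod is reported unschedulable ("0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind"). A much-simplified sketch of that label check, not the scheduler's own implementation (label key and zone values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeSatisfiesTopology reports whether the node's labels match at least one
// allowedTopologies term (a term matches only if all of its expressions match).
// Simplified illustration only.
func nodeSatisfiesTopology(nodeLabels map[string]string, terms []corev1.TopologySelectorTerm) bool {
	if len(terms) == 0 {
		return true // no topology restriction
	}
	for _, term := range terms {
		ok := true
		for _, req := range term.MatchLabelExpressions {
			val, found := nodeLabels[req.Key]
			if !found || !containsString(req.Values, val) {
				ok = false
				break
			}
		}
		if ok {
			return true
		}
	}
	return false
}

func containsString(values []string, v string) bool {
	for _, s := range values {
		if s == v {
			return true
		}
	}
	return false
}

func main() {
	nodeLabels := map[string]string{"topology.kubernetes.io/zone": "zone-a"}
	terms := []corev1.TopologySelectorTerm{{
		MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
			Key:    "topology.kubernetes.io/zone",
			Values: []string{"zone-b"}, // mismatched zone, as in the topology-unsatisfied case
		}},
	}}
	fmt.Println(nodeSatisfiesTopology(nodeLabels, terms)) // false: provisioning cannot happen on this node
}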
I0814 08:23:55.404021  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-topomismatch: (1.794402ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.406221  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-topomismatch: (1.450793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.411662  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch
I0814 08:23:55.411936  112882 scheduler.go:473] Skip schedule deleting pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-topomismatch
I0814 08:23:55.412884  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (5.854447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.414685  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (2.271595ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.418192  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (3.813153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.418549  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-topomismatch" deleted
I0814 08:23:55.419807  112882 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.122203ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.432173  112882 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (11.953153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
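The failure mode above is purely a topology one: scheduler_binder finds no matching volume for pvc-topomismatch on node-1 and reports that the node cannot satisfy the claim's provisioning topology requirements, so the pod is marked Unschedulable ("0/1 nodes are available") and preemption cannot help; the harness then tears the fixtures down before the next case. Below is a minimal stdlib-only sketch of that kind of check — the label key and types are illustrative stand-ins, not the scheduler_binder implementation:

    // Toy model of "Node cannot satisfy provisioning topology requirements":
    // at least one allowed-topology term must be matched by the candidate
    // node's labels for delayed provisioning to land there.
    package main

    import "fmt"

    type topologyTerm map[string]string // label key -> required value

    func nodeSatisfies(nodeLabels map[string]string, allowed []topologyTerm) bool {
        if len(allowed) == 0 {
            return true // no topology restriction at all
        }
        for _, term := range allowed {
            matched := true
            for k, v := range term {
                if nodeLabels[k] != v {
                    matched = false
                    break
                }
            }
            if matched {
                return true // any single matching term is enough
            }
        }
        return false
    }

    func main() {
        node1 := map[string]string{"topology.hypothetical/zone": "zone-a"}
        allowed := []topologyTerm{{"topology.hypothetical/zone": "zone-b"}}
        fmt.Println(nodeSatisfies(node1, allowed)) // false -> pod stays Pending
    }

With no term matching node-1's labels, neither binding nor provisioning can proceed on that node, which is exactly the outcome logged above.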
I0814 08:23:55.432527  112882 volume_binding_test.go:751] Running test wait one bound, one provisioned
I0814 08:23:55.434334  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.424494ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.436443  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.672554ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.450034  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (13.062935ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.457104  112882 httplog.go:90] POST /api/v1/persistentvolumes: (5.537283ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.465045  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (4.418768ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.466800  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind", version 56040
I0814 08:23:55.468957  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind", version 56041
I0814 08:23:55.469016  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.469047  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: no volume found
I0814 08:23:55.469070  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind] status: set phase Pending
I0814 08:23:55.469090  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind] status: phase Pending already set
I0814 08:23:55.470498  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-w-canbind", UID:"ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad", APIVersion:"v1", ResourceVersion:"56041", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0814 08:23:55.475040  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (6.847304ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.476755  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (4.816999ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.476768  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision", version 56042
I0814 08:23:55.477212  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.477328  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:55.477390  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Pending
I0814 08:23:55.477503  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-canprovision", UID:"f3654f41-65a8-480d-8259-dfab354c4b47", APIVersion:"v1", ResourceVersion:"56042", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0814 08:23:55.477464  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: phase Pending already set
I0814 08:23:55.480238  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I0814 08:23:55.480372  112882 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0814 08:23:55.480407  112882 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0814 08:23:55.480765  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (2.34311ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.480925  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (3.947861ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
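The "wait one bound, one provisioned" case sets up storage classes with delayed binding, one pre-existing volume (pv-w-canbind), two claims (pvc-w-canbind and pvc-canprovision), and finally the pod that will consume them. Both claims stay Pending and only emit WaitForFirstConsumer events because, with delayed binding, the claim sync defers binding and provisioning until the scheduler has placed a consuming pod. A toy model of that deferral (simplified field names, not the pv_controller code):

    // Why both claims above remain Pending until a pod is scheduled.
    package main

    import "fmt"

    type bindingMode string

    const (
        immediate            bindingMode = "Immediate"
        waitForFirstConsumer bindingMode = "WaitForFirstConsumer"
    )

    type claim struct {
        name         string
        mode         bindingMode
        selectedNode string // filled in once the scheduler has picked a node
    }

    func syncClaim(c claim) string {
        if c.mode == waitForFirstConsumer && c.selectedNode == "" {
            return c.name + ": Pending, waiting for first consumer to be created before binding"
        }
        return c.name + ": proceed with binding/provisioning"
    }

    func main() {
        fmt.Println(syncClaim(claim{name: "pvc-w-canbind", mode: waitForFirstConsumer}))
        fmt.Println(syncClaim(claim{name: "pvc-canprovision", mode: waitForFirstConsumer}))
    }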
I0814 08:23:55.482411  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision
I0814 08:23:55.482445  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision
I0814 08:23:55.482830  112882 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision", PVC "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" on node "node-1"
I0814 08:23:55.483022  112882 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision", PVC "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" on node "node-1"
I0814 08:23:55.483233  112882 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision" that has no matching volumes on node "node-1" ...
I0814 08:23:55.483395  112882 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision", node "node-1"
I0814 08:23:55.483473  112882 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind", version 56041
I0814 08:23:55.483530  112882 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision", version 56042
I0814 08:23:55.483651  112882 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision", node "node-1"
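In the scheduling pass that follows, neither claim matches an existing volume on node-1, so the binder falls back to provisioning for both, assumes them in the scheduler's PVC assume cache at their current resource versions (56041 and 56042), and starts BindPodVolumes asynchronously while the pod stays assumed. A simplified sketch of that partition decision (not the scheduler_binder API):

    // Claims with a matching PV on the node would be bound directly; the rest
    // fall through to dynamic provisioning. In this run both claims fell into
    // the "provision" bucket.
    package main

    import "fmt"

    type pvc struct{ name string }

    func partition(claims []pvc, matches map[string]string) (toBind, toProvision []pvc) {
        for _, c := range claims {
            if pv, ok := matches[c.name]; ok && pv != "" {
                toBind = append(toBind, c)
            } else {
                toProvision = append(toProvision, c)
            }
        }
        return
    }

    func main() {
        claims := []pvc{{"pvc-w-canbind"}, {"pvc-canprovision"}}
        matches := map[string]string{} // no matching PV found on node-1 for either claim
        bind, prov := partition(claims, matches)
        fmt.Println("bind:", bind, "provision:", prov)
    }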
I0814 08:23:55.486897  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-canbind: (2.841942ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43238]
I0814 08:23:55.487660  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" with version 56047
I0814 08:23:55.487820  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.488004  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: no volume found
I0814 08:23:55.488058  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: started
I0814 08:23:55.488118  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind[ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]]
I0814 08:23:55.488216  112882 pv_controller.go:1372] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind] started, class: "wait-68z2"
I0814 08:23:55.488866  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (7.542892ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.489131  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 56046
I0814 08:23:55.489157  112882 pv_controller.go:798] volume "pv-w-canbind" entered phase "Available"
I0814 08:23:55.489183  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 56046
I0814 08:23:55.489205  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I0814 08:23:55.489293  112882 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0814 08:23:55.489307  112882 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0814 08:23:55.489318  112882 pv_controller.go:780] updating PersistentVolume[pv-w-canbind]: phase Available already set
I0814 08:23:55.490399  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (2.705024ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43238]
I0814 08:23:55.490704  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56048
I0814 08:23:55.490756  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.490781  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:55.490867  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: started
I0814 08:23:55.490884  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[f3654f41-65a8-480d-8259-dfab354c4b47]]
I0814 08:23:55.490954  112882 pv_controller.go:1372] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] started, class: "wait-68z2"
I0814 08:23:55.492217  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" with version 56049
I0814 08:23:55.492254  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.492283  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: no volume found
I0814 08:23:55.492291  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: started
I0814 08:23:55.492304  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind[ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]]
I0814 08:23:55.492311  112882 pv_controller.go:1642] operation "provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind[ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]" is already running, skipping
I0814 08:23:55.492930  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-canbind: (4.400611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.493304  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" with version 56049
I0814 08:23:55.494775  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (3.313827ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43238]
I0814 08:23:55.495670  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56050
I0814 08:23:55.495753  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.495825  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:55.495867  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: started
I0814 08:23:55.495918  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[f3654f41-65a8-480d-8259-dfab354c4b47]]
I0814 08:23:55.495957  112882 pv_controller.go:1642] operation "provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[f3654f41-65a8-480d-8259-dfab354c4b47]" is already running, skipping
I0814 08:23:55.496232  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56050
I0814 08:23:55.497999  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad: (3.097564ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.498368  112882 pv_controller.go:1476] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" created
I0814 08:23:55.498413  112882 pv_controller.go:1493] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: trying to save volume pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad
I0814 08:23:55.498388  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-f3654f41-65a8-480d-8259-dfab354c4b47: (1.120671ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43238]
I0814 08:23:55.498979  112882 pv_controller.go:1476] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" created
I0814 08:23:55.499037  112882 pv_controller.go:1493] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: trying to save volume pvc-f3654f41-65a8-480d-8259-dfab354c4b47
I0814 08:23:55.503347  112882 httplog.go:90] POST /api/v1/persistentvolumes: (4.396334ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.503439  112882 httplog.go:90] POST /api/v1/persistentvolumes: (4.157009ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.503697  112882 pv_controller.go:1501] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" saved
I0814 08:23:55.503743  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad", version 56052
I0814 08:23:55.503773  112882 pv_controller.go:1554] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" provisioned for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.503827  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-w-canbind", UID:"ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad", APIVersion:"v1", ResourceVersion:"56049", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad using kubernetes.io/mock-provisioner
I0814 08:23:55.503969  112882 pv_controller.go:1501] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" saved
I0814 08:23:55.504051  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47", version 56051
I0814 08:23:55.504079  112882 pv_controller.go:1554] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" provisioned for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.504141  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-canprovision", UID:"f3654f41-65a8-480d-8259-dfab354c4b47", APIVersion:"v1", ResourceVersion:"56050", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f3654f41-65a8-480d-8259-dfab354c4b47 using kubernetes.io/mock-provisioner
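The PV controller then runs provisionClaimOperation for both assumed claims: it checks that no volume named pvc-&lt;claim UID&gt; exists yet (the two 404 GETs), has the test's kubernetes.io/mock-provisioner create one, saves it with a claimRef back to the claim, and emits ProvisioningSucceeded. A sketch of the naming and idempotency pattern that keeps retried operations safe (illustrative only, not the pv_controller implementation):

    // A provisioned volume is named "pvc-<claim UID>", so a lookup miss means
    // it still has to be created, and re-running the operation after a
    // successful save is a no-op.
    package main

    import "fmt"

    type store map[string]bool // existing PV names

    func provision(claimUID string, pvs store) string {
        name := "pvc-" + claimUID
        if pvs[name] {
            return name + " already exists, skipping"
        }
        pvs[name] = true // stands in for POST /api/v1/persistentvolumes
        return name + " created and saved"
    }

    func main() {
        pvs := store{}
        fmt.Println(provision("ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad", pvs))
        fmt.Println(provision("ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad", pvs)) // idempotent retry
    }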
I0814 08:23:55.504756  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" with version 56051
I0814 08:23:55.504812  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: f3654f41-65a8-480d-8259-dfab354c4b47)", boundByController: true
I0814 08:23:55.504827  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:55.504849  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.504861  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:23:55.504882  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" with version 56052
I0814 08:23:55.504921  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56050
I0814 08:23:55.504946  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.504990  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" found: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: f3654f41-65a8-480d-8259-dfab354c4b47)", boundByController: true
I0814 08:23:55.505011  112882 pv_controller.go:931] binding volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.505022  112882 pv_controller.go:829] updating PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.505042  112882 pv_controller.go:841] updating PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.505061  112882 pv_controller.go:777] updating PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: set phase Bound
I0814 08:23:55.504920  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind (uid: ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad)", boundByController: true
I0814 08:23:55.506339  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind
I0814 08:23:55.506430  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.506480  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:23:55.506793  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (2.625994ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.507904  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-f3654f41-65a8-480d-8259-dfab354c4b47/status: (2.254146ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.508286  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" with version 56054
I0814 08:23:55.508382  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: f3654f41-65a8-480d-8259-dfab354c4b47)", boundByController: true
I0814 08:23:55.508422  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:55.508496  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.508534  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:23:55.516390  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" with version 56054
I0814 08:23:55.516441  112882 pv_controller.go:798] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" entered phase "Bound"
I0814 08:23:55.516457  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: binding to "pvc-f3654f41-65a8-480d-8259-dfab354c4b47"
I0814 08:23:55.516490  112882 pv_controller.go:901] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.519514  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (4.573ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:55.519962  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (3.154815ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.520255  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56055
I0814 08:23:55.520294  112882 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: bound to "pvc-f3654f41-65a8-480d-8259-dfab354c4b47"
I0814 08:23:55.520306  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Bound
I0814 08:23:55.522843  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision/status: (2.170817ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.523237  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56057
I0814 08:23:55.523316  112882 pv_controller.go:742] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" entered phase "Bound"
I0814 08:23:55.523397  112882 pv_controller.go:957] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.523508  112882 pv_controller.go:958] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: f3654f41-65a8-480d-8259-dfab354c4b47)", boundByController: true
I0814 08:23:55.523585  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-f3654f41-65a8-480d-8259-dfab354c4b47", bindCompleted: true, boundByController: true
I0814 08:23:55.523719  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" with version 56049
I0814 08:23:55.523783  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.523855  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" found: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind (uid: ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad)", boundByController: true
I0814 08:23:55.523935  112882 pv_controller.go:931] binding volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.523980  112882 pv_controller.go:829] updating PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.524034  112882 pv_controller.go:841] updating PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.524091  112882 pv_controller.go:777] updating PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: set phase Bound
I0814 08:23:55.526974  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad/status: (2.025177ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.527204  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" with version 56058
I0814 08:23:55.527230  112882 pv_controller.go:798] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" entered phase "Bound"
I0814 08:23:55.527241  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: binding to "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad"
I0814 08:23:55.527264  112882 pv_controller.go:901] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.528018  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" with version 56058
I0814 08:23:55.528115  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind (uid: ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad)", boundByController: true
I0814 08:23:55.528363  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind
I0814 08:23:55.528440  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:55.528509  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:23:55.529934  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-canbind: (2.396636ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.530281  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" with version 56059
I0814 08:23:55.530325  112882 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: bound to "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad"
I0814 08:23:55.530338  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind] status: set phase Bound
I0814 08:23:55.533346  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-canbind/status: (2.668313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.533942  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" with version 56060
I0814 08:23:55.533984  112882 pv_controller.go:742] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" entered phase "Bound"
I0814 08:23:55.534002  112882 pv_controller.go:957] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.534031  112882 pv_controller.go:958] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind (uid: ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad)", boundByController: true
I0814 08:23:55.534073  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad", bindCompleted: true, boundByController: true
I0814 08:23:55.534123  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56057
I0814 08:23:55.534154  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Bound, bound to: "pvc-f3654f41-65a8-480d-8259-dfab354c4b47", bindCompleted: true, boundByController: true
I0814 08:23:55.534200  112882 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" found: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: f3654f41-65a8-480d-8259-dfab354c4b47)", boundByController: true
I0814 08:23:55.534243  112882 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: claim is already correctly bound
I0814 08:23:55.534252  112882 pv_controller.go:931] binding volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.534261  112882 pv_controller.go:829] updating PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.534289  112882 pv_controller.go:841] updating PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.534304  112882 pv_controller.go:777] updating PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: set phase Bound
I0814 08:23:55.534311  112882 pv_controller.go:780] updating PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: phase Bound already set
I0814 08:23:55.534320  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: binding to "pvc-f3654f41-65a8-480d-8259-dfab354c4b47"
I0814 08:23:55.534342  112882 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: already bound to "pvc-f3654f41-65a8-480d-8259-dfab354c4b47"
I0814 08:23:55.534355  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Bound
I0814 08:23:55.534381  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: phase Bound already set
I0814 08:23:55.534400  112882 pv_controller.go:957] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:55.534417  112882 pv_controller.go:958] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: f3654f41-65a8-480d-8259-dfab354c4b47)", boundByController: true
I0814 08:23:55.534430  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-f3654f41-65a8-480d-8259-dfab354c4b47", bindCompleted: true, boundByController: true
I0814 08:23:55.534466  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" with version 56060
I0814 08:23:55.534483  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: phase: Bound, bound to: "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad", bindCompleted: true, boundByController: true
I0814 08:23:55.534497  112882 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" found: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind (uid: ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad)", boundByController: true
I0814 08:23:55.534509  112882 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: claim is already correctly bound
I0814 08:23:55.534521  112882 pv_controller.go:931] binding volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.534527  112882 pv_controller.go:829] updating PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.534544  112882 pv_controller.go:841] updating PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.534550  112882 pv_controller.go:777] updating PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: set phase Bound
I0814 08:23:55.534555  112882 pv_controller.go:780] updating PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: phase Bound already set
I0814 08:23:55.534561  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: binding to "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad"
I0814 08:23:55.534580  112882 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind]: already bound to "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad"
I0814 08:23:55.534587  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind] status: set phase Bound
I0814 08:23:55.534623  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind] status: phase Bound already set
I0814 08:23:55.534632  112882 pv_controller.go:957] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind"
I0814 08:23:55.534649  112882 pv_controller.go:958] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind (uid: ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad)", boundByController: true
I0814 08:23:55.534673  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad", bindCompleted: true, boundByController: true
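From here the controller's syncVolume/syncClaim converge on the bound state: each provisioned volume already carries a claimRef (boundByController), so its phase is set to Bound, the claim gets its volume name and Bound status, and every later sync is a no-op, which is why the log repeats "already bound" and "phase Bound already set". A toy convergence model of those idempotent steps (not the real controller):

    // Binding is written in small idempotent steps; once everything is
    // persisted, re-running the sync changes nothing.
    package main

    import "fmt"

    type pair struct {
        pvPhase, claimPhase string
        claimVolumeName     string
    }

    func syncOnce(p pair) (pair, bool) {
        changed := false
        if p.pvPhase != "Bound" {
            p.pvPhase, changed = "Bound", true
        }
        if p.claimVolumeName == "" {
            p.claimVolumeName, changed = "pvc-f3654f41-65a8-480d-8259-dfab354c4b47", true
        }
        if p.claimPhase != "Bound" {
            p.claimPhase, changed = "Bound", true
        }
        return p, changed
    }

    func main() {
        p := pair{pvPhase: "Pending", claimPhase: "Pending"}
        for i := 0; i < 3; i++ {
            var changed bool
            p, changed = syncOnce(p)
            fmt.Printf("pass %d: %+v changed=%v\n", i+1, p, changed)
        }
    }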
I0814 08:23:55.574793  112882 cache.go:676] Couldn't expire cache for pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision. Binding is still in progress.
I0814 08:23:55.584103  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (2.161459ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.683770  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (1.800949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.783902  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (2.017307ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.883983  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (2.09368ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:55.983708  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (1.909527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.083820  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (2.118638ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.183548  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (1.751254ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.283698  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (1.883579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.383796  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (1.925685ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.483818  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (2.031463ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.491025  112882 scheduler_binder.go:545] All PVCs for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision" are bound
I0814 08:23:56.491107  112882 factory.go:615] Attempting to bind pod-pvc-canbind-or-provision to node-1
I0814 08:23:56.494413  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision/binding: (2.946981ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.494673  112882 scheduler.go:614] pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canbind-or-provision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0814 08:23:56.497202  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (2.159594ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
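Meanwhile the scheduler keeps the pod assumed ("Binding is still in progress") and the test polls the pod with periodic GETs until BindPodVolumes reports all PVCs bound, at which point the Binding subresource is posted and the pod lands on node-1. A stdlib-only polling helper in the spirit of those GETs — the real test presumably uses the wait/polling utilities from k8s.io/apimachinery; this only shows the shape of the loop:

    // Re-check a condition on an interval with an overall timeout.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func pollUntil(interval, timeout time.Duration, cond func() bool) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if cond() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for condition")
    }

    func main() {
        boundAt := time.Now().Add(300 * time.Millisecond) // pretend the bind lands later
        err := pollUntil(100*time.Millisecond, 2*time.Second, func() bool {
            return time.Now().After(boundAt) // stands in for: GET pod and check spec.nodeName
        })
        fmt.Println("pod bound:", err == nil)
    }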
I0814 08:23:56.583893  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canbind-or-provision: (2.109269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.585962  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-canbind: (1.473325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.587692  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.326732ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.589194  112882 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.140139ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.595364  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (5.676928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.599919  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" deleted
I0814 08:23:56.599969  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" with version 56054
I0814 08:23:56.599997  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: f3654f41-65a8-480d-8259-dfab354c4b47)", boundByController: true
I0814 08:23:56.600009  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:56.601721  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.486036ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:56.601943  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision not found
I0814 08:23:56.601974  112882 pv_controller.go:575] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" is released and reclaim policy "Delete" will be executed
I0814 08:23:56.601986  112882 pv_controller.go:777] updating PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: set phase Released
I0814 08:23:56.603054  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (6.885234ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.603174  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" deleted
I0814 08:23:56.604893  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-f3654f41-65a8-480d-8259-dfab354c4b47/status: (2.571872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:56.605043  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" with version 56094
I0814 08:23:56.605062  112882 pv_controller.go:798] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" entered phase "Released"
I0814 08:23:56.605072  112882 pv_controller.go:1022] reclaimVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: policy is Delete
I0814 08:23:56.605088  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-f3654f41-65a8-480d-8259-dfab354c4b47[ce86aaae-d492-4760-bf80-26dddf8bcd44]]
I0814 08:23:56.605111  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" with version 56058
I0814 08:23:56.605131  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind (uid: ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad)", boundByController: true
I0814 08:23:56.605142  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind
I0814 08:23:56.605239  112882 pv_controller.go:1146] deleteVolumeOperation [pvc-f3654f41-65a8-480d-8259-dfab354c4b47] started
I0814 08:23:56.606237  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-canbind: (985.661µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:56.606388  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-f3654f41-65a8-480d-8259-dfab354c4b47: (825.34µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43252]
I0814 08:23:56.606670  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind not found
I0814 08:23:56.606681  112882 pv_controller.go:1250] isVolumeReleased[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: volume is released
I0814 08:23:56.606691  112882 pv_controller.go:1285] doDeleteVolume [pvc-f3654f41-65a8-480d-8259-dfab354c4b47]
I0814 08:23:56.606687  112882 pv_controller.go:575] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" is released and reclaim policy "Delete" will be executed
I0814 08:23:56.606698  112882 pv_controller.go:777] updating PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: set phase Released
I0814 08:23:56.606711  112882 pv_controller.go:1316] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" deleted
I0814 08:23:56.606718  112882 pv_controller.go:1193] deleteVolumeOperation [pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: success
I0814 08:23:56.608156  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad/status: (1.302602ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:56.608316  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" with version 56096
I0814 08:23:56.608331  112882 pv_controller.go:798] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" entered phase "Released"
I0814 08:23:56.608339  112882 pv_controller.go:1022] reclaimVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: policy is Delete
I0814 08:23:56.608362  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad[7ee45193-6bdf-47f8-88f8-e85490188f73]]
I0814 08:23:56.608383  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" with version 56094
I0814 08:23:56.608398  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: phase: Released, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: f3654f41-65a8-480d-8259-dfab354c4b47)", boundByController: true
I0814 08:23:56.608407  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:56.608423  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision not found
I0814 08:23:56.608427  112882 pv_controller.go:1022] reclaimVolume[pvc-f3654f41-65a8-480d-8259-dfab354c4b47]: policy is Delete
I0814 08:23:56.608433  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-f3654f41-65a8-480d-8259-dfab354c4b47[ce86aaae-d492-4760-bf80-26dddf8bcd44]]
I0814 08:23:56.608438  112882 pv_controller.go:1642] operation "delete-pvc-f3654f41-65a8-480d-8259-dfab354c4b47[ce86aaae-d492-4760-bf80-26dddf8bcd44]" is already running, skipping
I0814 08:23:56.608460  112882 pv_controller.go:1146] deleteVolumeOperation [pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad] started
I0814 08:23:56.608628  112882 pv_controller_base.go:212] volume "pv-w-canbind" deleted
I0814 08:23:56.608647  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" with version 56096
I0814 08:23:56.608665  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: phase: Released, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind (uid: ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad)", boundByController: true
I0814 08:23:56.608672  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind
I0814 08:23:56.608688  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind not found
I0814 08:23:56.608693  112882 pv_controller.go:1022] reclaimVolume[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: policy is Delete
I0814 08:23:56.608703  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad[7ee45193-6bdf-47f8-88f8-e85490188f73]]
I0814 08:23:56.608707  112882 pv_controller.go:1642] operation "delete-pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad[7ee45193-6bdf-47f8-88f8-e85490188f73]" is already running, skipping
I0814 08:23:56.609624  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad: (1.047003ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:56.609874  112882 pv_controller.go:1250] isVolumeReleased[pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: volume is released
I0814 08:23:56.609884  112882 pv_controller.go:1285] doDeleteVolume [pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]
I0814 08:23:56.609900  112882 pv_controller.go:1316] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" deleted
I0814 08:23:56.609906  112882 pv_controller.go:1193] deleteVolumeOperation [pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad]: success
I0814 08:23:56.611675  112882 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-f3654f41-65a8-480d-8259-dfab354c4b47: (4.869695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43252]
I0814 08:23:56.612178  112882 pv_controller_base.go:212] volume "pvc-f3654f41-65a8-480d-8259-dfab354c4b47" deleted
I0814 08:23:56.612229  112882 pv_controller_base.go:396] deletion of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" was already processed
I0814 08:23:56.613841  112882 store.go:228] deletion of /603b5aaa-a10d-4b21-ab4b-ec546bbcb214/persistentvolumes/pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad failed because of a conflict, going to retry
I0814 08:23:56.613935  112882 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad: (3.941076ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0814 08:23:56.614325  112882 httplog.go:90] DELETE /api/v1/persistentvolumes: (10.772848ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.614742  112882 pv_controller.go:1200] failed to delete volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" from database: persistentvolumes "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" not found
I0814 08:23:56.614793  112882 pv_controller_base.go:212] volume "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" deleted
I0814 08:23:56.614836  112882 pv_controller_base.go:396] deletion of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind" was already processed
I0814 08:23:56.622648  112882 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.851346ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
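Teardown deletes the pod, both claims and all volumes. Releasing the two dynamically provisioned volumes triggers their Delete reclaim policy, so deleteVolumeOperation removes them while the test's bulk DELETE of persistent volumes races with it; that race is what produces the 404 on DELETE /api/v1/persistentvolumes/pvc-ae13d4b1-... and the "failed to delete volume ... not found" line, which appears benign here because the object is already gone. A small sketch of why treating NotFound as success makes such a race harmless (illustrative only):

    // Two independent deleters race for the same object; the loser sees
    // NotFound and can treat it as "already done".
    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var errNotFound = errors.New(`persistentvolumes "pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad" not found`)

    type fakeStore struct {
        mu  sync.Mutex
        pvs map[string]bool
    }

    func (s *fakeStore) delete(name string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        if !s.pvs[name] {
            return errNotFound
        }
        delete(s.pvs, name)
        return nil
    }

    func main() {
        s := &fakeStore{pvs: map[string]bool{"pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad": true}}
        var wg sync.WaitGroup
        for _, who := range []string{"pv-controller", "test-cleanup"} {
            wg.Add(1)
            go func(who string) {
                defer wg.Done()
                err := s.delete("pvc-ae13d4b1-3ec4-44d5-92a4-a8bdc228c9ad")
                if errors.Is(err, errNotFound) {
                    err = nil // already gone: nothing left to do
                }
                fmt.Println(who, "done, err =", err)
            }(who)
        }
        wg.Wait()
    }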
I0814 08:23:56.622988  112882 volume_binding_test.go:751] Running test one immediate pv prebound, one wait provisioned
I0814 08:23:56.624416  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.248269ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.625748  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.056254ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.627147  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (933.106µs) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.629356  112882 httplog.go:90] POST /api/v1/persistentvolumes: (1.602249ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.629636  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-prebound", version 56105
I0814 08:23:56.629718  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: )", boundByController: false
I0814 08:23:56.629726  112882 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound
I0814 08:23:56.629734  112882 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0814 08:23:56.631077  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (1.268603ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.631564  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.587522ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43252]
I0814 08:23:56.631837  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56107
I0814 08:23:56.631916  112882 pv_controller.go:798] volume "pv-i-prebound" entered phase "Available"
I0814 08:23:56.631961  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56107
I0814 08:23:56.632022  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: )", boundByController: false
I0814 08:23:56.632060  112882 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound
I0814 08:23:56.632095  112882 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0814 08:23:56.632187  112882 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0814 08:23:56.632736  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (1.034174ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.633109  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound", version 56106
I0814 08:23:56.633193  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:56.633252  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: )", boundByController: false
I0814 08:23:56.633308  112882 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.633357  112882 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.633396  112882 pv_controller.go:849] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0814 08:23:56.634807  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (1.657978ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0814 08:23:56.635039  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned
I0814 08:23:56.635100  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned
E0814 08:23:56.635342  112882 factory.go:566] Error scheduling volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0814 08:23:56.635437  112882 factory.go:624] Updating pod condition for volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
I0814 08:23:56.635822  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.17484ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43252]
I0814 08:23:56.635991  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56110
I0814 08:23:56.636011  112882 pv_controller.go:862] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.636019  112882 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0814 08:23:56.637616  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56110
I0814 08:23:56.637712  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: 8ab7e1b5-e658-499f-a40d-667a9a2e7997)", boundByController: false
I0814 08:23:56.637728  112882 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound
I0814 08:23:56.637761  112882 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:56.637774  112882 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0814 08:23:56.638106  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.791843ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43254]
I0814 08:23:56.638791  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (2.349807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43252]
I0814 08:23:56.639163  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned/status: (2.976727ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
E0814 08:23:56.639533  112882 scheduler.go:506] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0814 08:23:56.639659  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned
I0814 08:23:56.639671  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned
E0814 08:23:56.639800  112882 factory.go:566] Error scheduling volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0814 08:23:56.639817  112882 factory.go:624] Updating pod condition for volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
E0814 08:23:56.639827  112882 scheduler.go:506] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
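Both scheduling attempts above fail with "pod has unbound immediate PersistentVolumeClaims": an immediate-mode claim has to reach phase Bound before the pod can be placed, so the scheduler requeues the pod while the PV controller finishes the bind, and the repeated GETs of the pod further down are the test harness polling for progress at roughly 100ms intervals. A hypothetical helper showing that kind of wait, assuming the pre-context client-go signatures in use at the time of this run:

package waitsketch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the pod every 100ms until the scheduler has
// assigned it a node, mirroring the cadence of the GET requests in this log.
// Hypothetical helper for illustration; the real waits live in the test code.
func waitForPodScheduled(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}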
I0814 08:23:56.640080  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.764478ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:56.640293  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56113
I0814 08:23:56.640312  112882 pv_controller.go:798] volume "pv-i-prebound" entered phase "Bound"
I0814 08:23:56.640325  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0814 08:23:56.640339  112882 pv_controller.go:901] volume "pv-i-prebound" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.640646  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56113
I0814 08:23:56.640804  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: 8ab7e1b5-e658-499f-a40d-667a9a2e7997)", boundByController: false
I0814 08:23:56.640899  112882 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound
I0814 08:23:56.640974  112882 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:56.641041  112882 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0814 08:23:56.641503  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.36101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:56.641868  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-i-pv-prebound: (1.375749ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:56.641897  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.723177ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43254]
I0814 08:23:56.642319  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" with version 56115
I0814 08:23:56.642354  112882 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I0814 08:23:56.642366  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound] status: set phase Bound
I0814 08:23:56.644155  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-i-pv-prebound/status: (1.580598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:56.644442  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" with version 56116
I0814 08:23:56.644488  112882 pv_controller.go:742] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" entered phase "Bound"
I0814 08:23:56.644502  112882 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.644631  112882 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: 8ab7e1b5-e658-499f-a40d-667a9a2e7997)", boundByController: false
I0814 08:23:56.644645  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0814 08:23:56.644681  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision", version 56108
I0814 08:23:56.644698  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:56.644749  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:56.644785  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Pending
I0814 08:23:56.644797  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: phase Pending already set
I0814 08:23:56.644811  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" with version 56116
I0814 08:23:56.644827  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0814 08:23:56.644840  112882 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: 8ab7e1b5-e658-499f-a40d-667a9a2e7997)", boundByController: false
I0814 08:23:56.644849  112882 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: claim is already correctly bound
I0814 08:23:56.644864  112882 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.644876  112882 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.644889  112882 pv_controller.go:841] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.644896  112882 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0814 08:23:56.644903  112882 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I0814 08:23:56.644910  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0814 08:23:56.644925  112882 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I0814 08:23:56.644932  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound] status: set phase Bound
I0814 08:23:56.644945  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound] status: phase Bound already set
I0814 08:23:56.644953  112882 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound"
I0814 08:23:56.644965  112882 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: 8ab7e1b5-e658-499f-a40d-667a9a2e7997)", boundByController: false
I0814 08:23:56.644975  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0814 08:23:56.644988  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-canprovision", UID:"6da00561-5237-4b25-a5bf-eccc38cd3386", APIVersion:"v1", ResourceVersion:"56108", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0814 08:23:56.646704  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.561445ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:56.737499  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.685405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:56.837702  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (2.003363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:56.937915  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (2.181966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.037619  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.56008ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.138373  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (2.551767ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.237662  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.972817ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.337825  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (2.082344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.437176  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.586413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.537702  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.953486ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.637782  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.982351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.737350  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.594855ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.837319  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.785133ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:57.937332  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.611786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.037444  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.724441ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.137741  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (2.090838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.237192  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.561606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.337563  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.896254ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.437104  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.504447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.537471  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.774234ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.575428  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned
I0814 08:23:58.575471  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned
I0814 08:23:58.575778  112882 scheduler_binder.go:651] All bound volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned" match with Node "node-1"
I0814 08:23:58.575813  112882 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned", PVC "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" on node "node-1"
I0814 08:23:58.575832  112882 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0814 08:23:58.575937  112882 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned", node "node-1"
I0814 08:23:58.575976  112882 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision", version 56108
I0814 08:23:58.576034  112882 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned", node "node-1"
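AssumePodVolumes records the provisioning decision in the scheduler's assume cache, and BindPodVolumes then makes it real by updating the claim; the PUT on pvc-canprovision just below carries the selected node so the mock provisioner knows where to create the volume. A rough sketch of that update, assuming the selected-node annotation key used by the volume scheduling code of this era and pre-context client signatures:

package bindersketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markSelectedNode approximates the update BindPodVolumes issues for claims
// that still need provisioning: the chosen node is recorded as an annotation
// on the PVC so the provisioner knows where to place the volume.
func markSelectedNode(client kubernetes.Interface, ns, claim, node string) error {
	pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(claim, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pvc.Annotations == nil {
		pvc.Annotations = map[string]string{}
	}
	pvc.Annotations["volume.kubernetes.io/selected-node"] = node
	_, err = client.CoreV1().PersistentVolumeClaims(ns).Update(pvc)
	return err
}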
I0814 08:23:58.578625  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (2.136608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.578934  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56148
I0814 08:23:58.578973  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:58.579004  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:58.579017  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: started
I0814 08:23:58.579035  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[6da00561-5237-4b25-a5bf-eccc38cd3386]]
I0814 08:23:58.579117  112882 pv_controller.go:1372] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] started, class: "wait-r624"
I0814 08:23:58.580821  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.437951ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.581047  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56149
I0814 08:23:58.581056  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56149
I0814 08:23:58.581085  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:58.581109  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:58.581118  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: started
I0814 08:23:58.581130  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[6da00561-5237-4b25-a5bf-eccc38cd3386]]
I0814 08:23:58.581137  112882 pv_controller.go:1642] operation "provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[6da00561-5237-4b25-a5bf-eccc38cd3386]" is already running, skipping
I0814 08:23:58.582146  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-6da00561-5237-4b25-a5bf-eccc38cd3386: (950.018µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.582362  112882 pv_controller.go:1476] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" created
I0814 08:23:58.582392  112882 pv_controller.go:1493] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: trying to save volume pvc-6da00561-5237-4b25-a5bf-eccc38cd3386
I0814 08:23:58.583924  112882 httplog.go:90] POST /api/v1/persistentvolumes: (1.27877ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.584163  112882 pv_controller.go:1501] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" saved
I0814 08:23:58.584189  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386", version 56150
I0814 08:23:58.584211  112882 pv_controller.go:1554] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" provisioned for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.584441  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" with version 56150
I0814 08:23:58.584451  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-canprovision", UID:"6da00561-5237-4b25-a5bf-eccc38cd3386", APIVersion:"v1", ResourceVersion:"56149", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-6da00561-5237-4b25-a5bf-eccc38cd3386 using kubernetes.io/mock-provisioner
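The provisioner names the new PV after the claim UID (hence pvc-6da00561-...), points its ClaimRef back at the claim including the UID, and applies the class's Delete reclaim policy, which is what lets the cleanup at the end of the case remove it automatically. A sketch of roughly what such a PV looks like, with the capacity and HostPath source invented for illustration rather than taken from the mock provisioner's source:

package provisionsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// provisionedPV sketches the PV a dynamic provisioner hands back for a claim:
// named "pvc-<claim UID>", pre-bound via a ClaimRef that includes the UID
// (so the log shows boundByController: true), and reclaimed with Delete.
func provisionedPV(claim *v1.PersistentVolumeClaim, className string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-" + string(claim.UID)},
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("5Gi")},
			AccessModes: claim.Spec.AccessModes,
			ClaimRef: &v1.ObjectReference{
				Namespace: claim.Namespace,
				Name:      claim.Name,
				UID:       claim.UID,
			},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimDelete,
			StorageClassName:              className,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/" + string(claim.UID)},
			},
		},
	}
}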
I0814 08:23:58.584473  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 6da00561-5237-4b25-a5bf-eccc38cd3386)", boundByController: true
I0814 08:23:58.584486  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:58.584503  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:58.584516  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:23:58.584542  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56149
I0814 08:23:58.584555  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:58.584582  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" found: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 6da00561-5237-4b25-a5bf-eccc38cd3386)", boundByController: true
I0814 08:23:58.584628  112882 pv_controller.go:931] binding volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.584642  112882 pv_controller.go:829] updating PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.584656  112882 pv_controller.go:841] updating PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.584664  112882 pv_controller.go:777] updating PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: set phase Bound
I0814 08:23:58.585863  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.359233ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.586144  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-6da00561-5237-4b25-a5bf-eccc38cd3386/status: (1.227575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:58.586570  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" with version 56152
I0814 08:23:58.586622  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" with version 56152
I0814 08:23:58.586622  112882 pv_controller.go:798] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" entered phase "Bound"
I0814 08:23:58.586647  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: binding to "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386"
I0814 08:23:58.586648  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 6da00561-5237-4b25-a5bf-eccc38cd3386)", boundByController: true
I0814 08:23:58.586662  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:58.586662  112882 pv_controller.go:901] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.586677  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:58.586696  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:23:58.588085  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.217986ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.588269  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56153
I0814 08:23:58.588286  112882 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: bound to "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386"
I0814 08:23:58.588295  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Bound
I0814 08:23:58.589920  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision/status: (1.49383ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.590062  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56154
I0814 08:23:58.590077  112882 pv_controller.go:742] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" entered phase "Bound"
I0814 08:23:58.590088  112882 pv_controller.go:957] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.590104  112882 pv_controller.go:958] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 6da00561-5237-4b25-a5bf-eccc38cd3386)", boundByController: true
I0814 08:23:58.590116  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386", bindCompleted: true, boundByController: true
I0814 08:23:58.590145  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56154
I0814 08:23:58.590157  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Bound, bound to: "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386", bindCompleted: true, boundByController: true
I0814 08:23:58.590175  112882 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" found: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 6da00561-5237-4b25-a5bf-eccc38cd3386)", boundByController: true
I0814 08:23:58.590188  112882 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: claim is already correctly bound
I0814 08:23:58.590197  112882 pv_controller.go:931] binding volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.590207  112882 pv_controller.go:829] updating PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.590224  112882 pv_controller.go:841] updating PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.590231  112882 pv_controller.go:777] updating PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: set phase Bound
I0814 08:23:58.590237  112882 pv_controller.go:780] updating PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: phase Bound already set
I0814 08:23:58.590242  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: binding to "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386"
I0814 08:23:58.590256  112882 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: already bound to "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386"
I0814 08:23:58.590262  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Bound
I0814 08:23:58.590276  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: phase Bound already set
I0814 08:23:58.590283  112882 pv_controller.go:957] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:58.590299  112882 pv_controller.go:958] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 6da00561-5237-4b25-a5bf-eccc38cd3386)", boundByController: true
I0814 08:23:58.590310  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386", bindCompleted: true, boundByController: true
I0814 08:23:58.637229  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.589941ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.737567  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.887141ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.837759  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.916206ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:58.937374  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.772605ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.037526  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.711594ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.137677  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.994299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.237664  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.957738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.337674  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.90381ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.437291  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.68349ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.537472  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.819026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.575469  112882 cache.go:676] Couldn't expire cache for pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned. Binding is still in progress.
I0814 08:23:59.579143  112882 scheduler_binder.go:545] All PVCs for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned" are bound
I0814 08:23:59.579210  112882 factory.go:615] Attempting to bind pod-i-pv-prebound-w-provisioned to node-1
I0814 08:23:59.582721  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned/binding: (3.0575ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.583226  112882 scheduler.go:614] pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0814 08:23:59.585420  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.883176ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
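The POST to .../pods/pod-i-pv-prebound-w-provisioned/binding above is the scheduler committing its placement through the pods/binding subresource once BindPodVolumes reports all PVCs bound. The equivalent client-go call, shown with the pre-context signature to match this run:

package bindsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodToNode posts a Binding object to the pods/binding subresource, which
// is how the scheduler sets spec.nodeName after volume binding succeeds.
func bindPodToNode(client kubernetes.Interface, ns, pod, node string) error {
	return client.CoreV1().Pods(ns).Bind(&v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	})
}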
I0814 08:23:59.637802  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-pv-prebound-w-provisioned: (1.98942ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.639993  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-i-pv-prebound: (1.550488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.641820  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.330253ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.643810  112882 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.339852ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.649532  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (5.286049ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.654049  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" deleted
I0814 08:23:59.654441  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" with version 56152
I0814 08:23:59.654639  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 6da00561-5237-4b25-a5bf-eccc38cd3386)", boundByController: true
I0814 08:23:59.654710  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:59.655838  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (799.774µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.656028  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision not found
I0814 08:23:59.656048  112882 pv_controller.go:575] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" is released and reclaim policy "Delete" will be executed
I0814 08:23:59.656060  112882 pv_controller.go:777] updating PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: set phase Released
I0814 08:23:59.658204  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (8.185977ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.658930  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" deleted
I0814 08:23:59.659253  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-6da00561-5237-4b25-a5bf-eccc38cd3386/status: (2.988251ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.659450  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" with version 56198
I0814 08:23:59.659499  112882 pv_controller.go:798] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" entered phase "Released"
I0814 08:23:59.659511  112882 pv_controller.go:1022] reclaimVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: policy is Delete
I0814 08:23:59.659532  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-6da00561-5237-4b25-a5bf-eccc38cd3386[6c74191b-9471-44bd-93e7-68a4b9428d4e]]
I0814 08:23:59.659567  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56113
I0814 08:23:59.659631  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: 8ab7e1b5-e658-499f-a40d-667a9a2e7997)", boundByController: false
I0814 08:23:59.659650  112882 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound
I0814 08:23:59.659673  112882 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound not found
I0814 08:23:59.659693  112882 pv_controller.go:575] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I0814 08:23:59.659707  112882 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Released
I0814 08:23:59.659881  112882 pv_controller.go:1146] deleteVolumeOperation [pvc-6da00561-5237-4b25-a5bf-eccc38cd3386] started
I0814 08:23:59.662142  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.222706ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.663127  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56199
I0814 08:23:59.663159  112882 pv_controller.go:798] volume "pv-i-prebound" entered phase "Released"
I0814 08:23:59.663169  112882 pv_controller.go:1011] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I0814 08:23:59.663193  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" with version 56198
I0814 08:23:59.663216  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: phase: Released, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 6da00561-5237-4b25-a5bf-eccc38cd3386)", boundByController: true
I0814 08:23:59.663231  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:59.663260  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision not found
I0814 08:23:59.663296  112882 pv_controller.go:1022] reclaimVolume[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: policy is Delete
I0814 08:23:59.663313  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-6da00561-5237-4b25-a5bf-eccc38cd3386[6c74191b-9471-44bd-93e7-68a4b9428d4e]]
I0814 08:23:59.663321  112882 pv_controller.go:1642] operation "delete-pvc-6da00561-5237-4b25-a5bf-eccc38cd3386[6c74191b-9471-44bd-93e7-68a4b9428d4e]" is already running, skipping
I0814 08:23:59.663340  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56199
I0814 08:23:59.663360  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound (uid: 8ab7e1b5-e658-499f-a40d-667a9a2e7997)", boundByController: false
I0814 08:23:59.663375  112882 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound
I0814 08:23:59.663411  112882 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound not found
I0814 08:23:59.663424  112882 pv_controller.go:1011] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
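Two reclaim policies appear in this cleanup: the dynamically provisioned volume carries Delete and gets a deleteVolumeOperation, while the hand-made pv-i-prebound carries Retain and is simply left in phase Released. A minimal sketch of that branch, not the controller's actual code:

package reclaimsketch

import v1 "k8s.io/api/core/v1"

// reclaimAction mirrors the decision the reclaimVolume log lines reflect once
// a bound claim has been deleted and its PV has entered phase Released.
func reclaimAction(pv *v1.PersistentVolume) string {
	switch pv.Spec.PersistentVolumeReclaimPolicy {
	case v1.PersistentVolumeReclaimRetain:
		return "policy is Retain, nothing to do" // PV stays Released until an admin acts
	case v1.PersistentVolumeReclaimDelete:
		return "policy is Delete" // a delete operation is scheduled for the volume
	case v1.PersistentVolumeReclaimRecycle:
		return "policy is Recycle" // legacy path, not exercised by this test
	default:
		return "unknown reclaim policy"
	}
}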
I0814 08:23:59.663949  112882 pv_controller_base.go:212] volume "pv-i-prebound" deleted
I0814 08:23:59.663982  112882 pv_controller_base.go:396] deletion of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-i-pv-prebound" was already processed
I0814 08:23:59.664071  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-6da00561-5237-4b25-a5bf-eccc38cd3386: (3.598132ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.664352  112882 pv_controller.go:1250] isVolumeReleased[pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: volume is released
I0814 08:23:59.664372  112882 pv_controller.go:1285] doDeleteVolume [pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]
I0814 08:23:59.664395  112882 pv_controller.go:1316] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" deleted
I0814 08:23:59.664405  112882 pv_controller.go:1193] deleteVolumeOperation [pvc-6da00561-5237-4b25-a5bf-eccc38cd3386]: success
I0814 08:23:59.666452  112882 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.303523ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.666836  112882 store.go:228] deletion of /603b5aaa-a10d-4b21-ab4b-ec546bbcb214/persistentvolumes/pvc-6da00561-5237-4b25-a5bf-eccc38cd3386 failed because of a conflict, going to retry
I0814 08:23:59.666961  112882 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-6da00561-5237-4b25-a5bf-eccc38cd3386: (2.387535ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.667157  112882 pv_controller.go:1200] failed to delete volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" from database: persistentvolumes "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" not found
I0814 08:23:59.667490  112882 pv_controller_base.go:212] volume "pvc-6da00561-5237-4b25-a5bf-eccc38cd3386" deleted
I0814 08:23:59.667541  112882 pv_controller_base.go:396] deletion of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" was already processed
I0814 08:23:59.675011  112882 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.957985ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0814 08:23:59.675227  112882 volume_binding_test.go:751] Running test wait one pv prebound, one provisioned
I0814 08:23:59.676953  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.500706ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.678678  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.095594ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.680760  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.245904ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.682863  112882 httplog.go:90] POST /api/v1/persistentvolumes: (1.632911ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.683221  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-prebound", version 56208
I0814 08:23:59.683253  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: )", boundByController: false
I0814 08:23:59.683261  112882 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound
I0814 08:23:59.683270  112882 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0814 08:23:59.684613  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (1.246579ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.684929  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound", version 56209
I0814 08:23:59.684956  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.684989  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: )", boundByController: false
I0814 08:23:59.685016  112882 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.685030  112882 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.685051  112882 pv_controller.go:849] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0814 08:23:59.686262  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (1.229594ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.687900  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (4.432958ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.688173  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56211
I0814 08:23:59.688206  112882 pv_controller.go:798] volume "pv-w-prebound" entered phase "Available"
I0814 08:23:59.688227  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56211
I0814 08:23:59.688244  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: )", boundByController: false
I0814 08:23:59.688249  112882 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound
I0814 08:23:59.688263  112882 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0814 08:23:59.688278  112882 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Available already set
I0814 08:23:59.689002  112882 store.go:349] GuaranteedUpdate of /603b5aaa-a10d-4b21-ab4b-ec546bbcb214/persistentvolumes/pv-w-prebound failed because of a conflict, going to retry
I0814 08:23:59.689215  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (3.521309ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.689396  112882 pv_controller.go:852] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0814 08:23:59.689422  112882 pv_controller.go:934] error binding volume "pv-w-prebound" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0814 08:23:59.689440  112882 pv_controller_base.go:246] could not sync claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
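[annotation] The 409 above is an optimistic-concurrency conflict: the PV controller's PUT carried a stale resourceVersion, the apiserver rejected it, and the claim was simply requeued. As a rough illustration only (not the controller's actual code), the usual client-go pattern for this situation is retry.RetryOnConflict with the latest object re-read inside the closure; getLatestPV/updatePV below are hypothetical stand-ins.

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/retry"
    )

    // Hypothetical stand-ins for a real clientset; only the retry shape matters.
    type pv struct {
        name     string
        claimRef string
    }

    func getLatestPV(name string) (*pv, error) { return &pv{name: name}, nil }
    func updatePV(v *pv) error                 { return nil }

    // bindClaimRef re-reads the freshest PV and reapplies the change whenever
    // the apiserver answers 409 Conflict ("the object has been modified").
    func bindClaimRef(volName, claimKey string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            latest, err := getLatestPV(volName)
            if err != nil {
                return err
            }
            latest.claimRef = claimKey
            return updatePV(latest)
        })
    }

    func main() {
        if err := bindClaimRef("pv-w-prebound", "pvc-w-pv-prebound"); err != nil {
            fmt.Println("bind failed:", err)
        }
    }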
I0814 08:23:59.689467  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision", version 56210
I0814 08:23:59.689485  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.689506  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:59.689529  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Pending
I0814 08:23:59.689542  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: phase Pending already set
I0814 08:23:59.689583  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-canprovision", UID:"3219f95c-3d1d-448e-b233-32974b83f1ea", APIVersion:"v1", ResourceVersion:"56210", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0814 08:23:59.691774  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (2.000646ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.692543  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (5.809564ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43280]
I0814 08:23:59.692960  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned
I0814 08:23:59.693011  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned
I0814 08:23:59.693286  112882 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned", PVC "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" on node "node-1"
I0814 08:23:59.693317  112882 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0814 08:23:59.693457  112882 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned", node "node-1"
I0814 08:23:59.693499  112882 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-prebound", version 56211
I0814 08:23:59.693514  112882 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision", version 56210
I0814 08:23:59.693629  112882 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned", node "node-1"
I0814 08:23:59.693668  112882 scheduler_binder.go:399] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
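[annotation] The "Assumed v1.PersistentVolume ... version 56211" lines come from the scheduler binder's in-memory assume cache: intended bindings are recorded optimistically before the API writes land, so later scheduling passes already see them. A stand-alone sketch of that idea (simplified, not the real scheduler_assume_cache), assuming string payloads and integer versions:

    package main

    import (
        "fmt"
        "sync"
    )

    // assumeCache keeps an optimistic copy of an object until the informer
    // delivers an update with an equal or newer version.
    type assumeCache struct {
        mu       sync.Mutex
        objects  map[string]string // key -> payload (illustrative)
        versions map[string]int
    }

    func newAssumeCache() *assumeCache {
        return &assumeCache{objects: map[string]string{}, versions: map[string]int{}}
    }

    // Assume records the binder's expected state ahead of the API write.
    func (c *assumeCache) Assume(key, obj string, version int) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.objects[key] = obj
        c.versions[key] = version
    }

    // Update replaces the assumed copy only once the informer has caught up.
    func (c *assumeCache) Update(key, obj string, version int) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if version >= c.versions[key] {
            c.objects[key] = obj
            c.versions[key] = version
        }
    }

    func main() {
        c := newAssumeCache()
        c.Assume("pv-w-prebound", "claimRef=pvc-w-pv-prebound", 56211)
        c.Update("pv-w-prebound", "stale", 56208) // older event, ignored
        fmt.Println(c.objects["pv-w-prebound"])
    }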
I0814 08:23:59.696042  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56214
I0814 08:23:59.696084  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: 51a1a4e3-b18e-4714-bc5e-f1c84714f46c)", boundByController: false
I0814 08:23:59.696097  112882 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound
I0814 08:23:59.696114  112882 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.696129  112882 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0814 08:23:59.696154  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" with version 56209
I0814 08:23:59.696166  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.696192  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: 51a1a4e3-b18e-4714-bc5e-f1c84714f46c)", boundByController: false
I0814 08:23:59.696205  112882 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.696216  112882 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.696235  112882 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.696245  112882 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0814 08:23:59.696801  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.839935ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.697136  112882 scheduler_binder.go:405] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.698274  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.780246ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.698529  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56215
I0814 08:23:59.698558  112882 pv_controller.go:798] volume "pv-w-prebound" entered phase "Bound"
I0814 08:23:59.698571  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0814 08:23:59.698587  112882 pv_controller.go:901] volume "pv-w-prebound" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.698810  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56215
I0814 08:23:59.698885  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: 51a1a4e3-b18e-4714-bc5e-f1c84714f46c)", boundByController: false
I0814 08:23:59.698905  112882 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound
I0814 08:23:59.698956  112882 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.699002  112882 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0814 08:23:59.699144  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.850544ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.700345  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-pv-prebound: (1.521014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.700640  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" with version 56217
I0814 08:23:59.700668  112882 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I0814 08:23:59.700696  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound] status: set phase Bound
I0814 08:23:59.702654  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-pv-prebound/status: (1.644561ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.702835  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" with version 56218
I0814 08:23:59.702874  112882 pv_controller.go:742] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" entered phase "Bound"
I0814 08:23:59.702891  112882 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.702942  112882 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: 51a1a4e3-b18e-4714-bc5e-f1c84714f46c)", boundByController: false
I0814 08:23:59.702956  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0814 08:23:59.702985  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56216
I0814 08:23:59.702995  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.703013  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:59.703020  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: started
I0814 08:23:59.703032  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[3219f95c-3d1d-448e-b233-32974b83f1ea]]
I0814 08:23:59.703044  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" with version 56218
I0814 08:23:59.703064  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0814 08:23:59.703075  112882 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: 51a1a4e3-b18e-4714-bc5e-f1c84714f46c)", boundByController: false
I0814 08:23:59.703082  112882 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: claim is already correctly bound
I0814 08:23:59.703089  112882 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.703096  112882 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.703108  112882 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.703114  112882 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0814 08:23:59.703120  112882 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I0814 08:23:59.703127  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0814 08:23:59.703144  112882 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I0814 08:23:59.703152  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound] status: set phase Bound
I0814 08:23:59.703169  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound] status: phase Bound already set
I0814 08:23:59.703184  112882 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound"
I0814 08:23:59.703195  112882 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: 51a1a4e3-b18e-4714-bc5e-f1c84714f46c)", boundByController: false
I0814 08:23:59.703204  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0814 08:23:59.703246  112882 pv_controller.go:1372] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] started, class: "wait-b49c"
I0814 08:23:59.705120  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.668781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.705464  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56219
I0814 08:23:59.705487  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.705503  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:23:59.705510  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: started
I0814 08:23:59.705521  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[3219f95c-3d1d-448e-b233-32974b83f1ea]]
I0814 08:23:59.705541  112882 pv_controller.go:1642] operation "provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[3219f95c-3d1d-448e-b233-32974b83f1ea]" is already running, skipping
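[annotation] "operation ... is already running, skipping" is the controller's per-operation single-flight guard: each named operation (here provision-<namespace>/<claim>[uid]) runs at most once at a time, and duplicate syncs are dropped while it is in flight. A small illustrative sketch of such a guard with made-up names (not the controller's goroutinemap API):

    package main

    import (
        "fmt"
        "sync"
    )

    // opRunner runs each named operation at most once at a time.
    type opRunner struct {
        mu      sync.Mutex
        running map[string]bool
    }

    func newOpRunner() *opRunner { return &opRunner{running: map[string]bool{}} }

    // Run starts fn asynchronously unless an operation with the same name
    // is still in flight, in which case the call is skipped.
    func (r *opRunner) Run(name string, fn func()) bool {
        r.mu.Lock()
        if r.running[name] {
            r.mu.Unlock()
            return false // "is already running, skipping"
        }
        r.running[name] = true
        r.mu.Unlock()

        go func() {
            defer func() {
                r.mu.Lock()
                delete(r.running, name)
                r.mu.Unlock()
            }()
            fn()
        }()
        return true
    }

    func main() {
        r := newOpRunner()
        done := make(chan struct{})
        fmt.Println(r.Run("provision-pvc-canprovision", func() { <-done })) // true
        fmt.Println(r.Run("provision-pvc-canprovision", func() { <-done })) // false: still running
        close(done)
    }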
I0814 08:23:59.705588  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56219
I0814 08:23:59.706699  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-3219f95c-3d1d-448e-b233-32974b83f1ea: (885.841µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.706939  112882 pv_controller.go:1476] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" created
I0814 08:23:59.707012  112882 pv_controller.go:1493] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: trying to save volume pvc-3219f95c-3d1d-448e-b233-32974b83f1ea
I0814 08:23:59.708556  112882 httplog.go:90] POST /api/v1/persistentvolumes: (1.331122ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.708977  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea", version 56220
I0814 08:23:59.709028  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 3219f95c-3d1d-448e-b233-32974b83f1ea)", boundByController: true
I0814 08:23:59.709039  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:59.709053  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.709071  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:23:59.708988  112882 pv_controller.go:1501] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" saved
I0814 08:23:59.709134  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" with version 56220
I0814 08:23:59.709170  112882 pv_controller.go:1554] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" provisioned for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.709136  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56219
I0814 08:23:59.709343  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.709370  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" found: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 3219f95c-3d1d-448e-b233-32974b83f1ea)", boundByController: true
I0814 08:23:59.709393  112882 pv_controller.go:931] binding volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.709406  112882 pv_controller.go:829] updating PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.709419  112882 pv_controller.go:841] updating PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.709430  112882 pv_controller.go:777] updating PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: set phase Bound
I0814 08:23:59.709813  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-canprovision", UID:"3219f95c-3d1d-448e-b233-32974b83f1ea", APIVersion:"v1", ResourceVersion:"56219", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3219f95c-3d1d-448e-b233-32974b83f1ea using kubernetes.io/mock-provisioner
I0814 08:23:59.711198  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.653747ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:23:59.711662  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-3219f95c-3d1d-448e-b233-32974b83f1ea/status: (2.0613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.711904  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" with version 56222
I0814 08:23:59.711926  112882 pv_controller.go:798] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" entered phase "Bound"
I0814 08:23:59.711936  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: binding to "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea"
I0814 08:23:59.711952  112882 pv_controller.go:901] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.713255  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" with version 56222
I0814 08:23:59.713297  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 3219f95c-3d1d-448e-b233-32974b83f1ea)", boundByController: true
I0814 08:23:59.713309  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:23:59.713330  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:23:59.713349  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:23:59.714051  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.777365ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.714308  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56223
I0814 08:23:59.714346  112882 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: bound to "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea"
I0814 08:23:59.714355  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Bound
I0814 08:23:59.716800  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision/status: (2.133008ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.717066  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56224
I0814 08:23:59.717108  112882 pv_controller.go:742] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" entered phase "Bound"
I0814 08:23:59.717122  112882 pv_controller.go:957] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.717141  112882 pv_controller.go:958] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 3219f95c-3d1d-448e-b233-32974b83f1ea)", boundByController: true
I0814 08:23:59.717159  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea", bindCompleted: true, boundByController: true
I0814 08:23:59.717211  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56224
I0814 08:23:59.717236  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Bound, bound to: "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea", bindCompleted: true, boundByController: true
I0814 08:23:59.717256  112882 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" found: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 3219f95c-3d1d-448e-b233-32974b83f1ea)", boundByController: true
I0814 08:23:59.717274  112882 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: claim is already correctly bound
I0814 08:23:59.717284  112882 pv_controller.go:931] binding volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.717294  112882 pv_controller.go:829] updating PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.717321  112882 pv_controller.go:841] updating PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.717336  112882 pv_controller.go:777] updating PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: set phase Bound
I0814 08:23:59.717347  112882 pv_controller.go:780] updating PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: phase Bound already set
I0814 08:23:59.717357  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: binding to "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea"
I0814 08:23:59.717376  112882 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: already bound to "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea"
I0814 08:23:59.717386  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Bound
I0814 08:23:59.717408  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: phase Bound already set
I0814 08:23:59.717420  112882 pv_controller.go:957] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:23:59.717440  112882 pv_controller.go:958] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 3219f95c-3d1d-448e-b233-32974b83f1ea)", boundByController: true
I0814 08:23:59.717465  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea", bindCompleted: true, boundByController: true
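[annotation] As the lines above show, the dynamically provisioned volume is named deterministically after the claim's UID ("pvc-" + UID) and is then pushed through the same bind/update path as a static PV. A trivial sketch of that naming convention only:

    package main

    import "fmt"

    // provisionedPVName mirrors the convention visible above: the PV created
    // for a claim is named "pvc-" followed by the claim's UID.
    func provisionedPVName(claimUID string) string {
        return "pvc-" + claimUID
    }

    func main() {
        // Prints pvc-3219f95c-3d1d-448e-b233-32974b83f1ea, matching the log.
        fmt.Println(provisionedPVName("3219f95c-3d1d-448e-b233-32974b83f1ea"))
    }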
I0814 08:23:59.796039  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (2.465892ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.895255  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (1.67851ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:23:59.996183  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (2.69138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.095682  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (2.173427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.195305  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (1.894939ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.295486  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (1.977522ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.395295  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (1.915134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.495465  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (2.076361ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.575752  112882 cache.go:676] Couldn't expire cache for pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned. Binding is still in progress.
I0814 08:24:00.595474  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (2.11255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.695104  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (1.749859ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.699567  112882 scheduler_binder.go:545] All PVCs for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned" are bound
I0814 08:24:00.699640  112882 factory.go:615] Attempting to bind pod-w-pv-prebound-w-provisioned to node-1
I0814 08:24:00.701966  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned/binding: (2.063046ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.702472  112882 scheduler.go:614] pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-w-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0814 08:24:00.704160  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.410538ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
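[annotation] The POST to .../pods/pod-w-pv-prebound-w-provisioned/binding above is the scheduler committing its placement through the pod's binding subresource. A minimal sketch that only constructs the Binding object with the upstream core/v1 types; the client call that posts it is omitted, since its exact signature depends on the client-go version:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newBinding builds the object posted to a pod's "binding" subresource
    // to pin the pod onto a node.
    func newBinding(podName, nodeName string) *v1.Binding {
        return &v1.Binding{
            ObjectMeta: metav1.ObjectMeta{Name: podName},
            Target: v1.ObjectReference{
                Kind: "Node",
                Name: nodeName,
            },
        }
    }

    func main() {
        b := newBinding("pod-w-pv-prebound-w-provisioned", "node-1")
        fmt.Printf("bind %s -> %s\n", b.Name, b.Target.Name)
    }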
I0814 08:24:00.794933  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-w-pv-prebound-w-provisioned: (1.586228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.797017  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-w-pv-prebound: (1.684366ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.799164  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.686288ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.801078  112882 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (1.542571ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.807988  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (6.23368ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.814842  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" deleted
I0814 08:24:00.814895  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" with version 56222
I0814 08:24:00.814925  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 3219f95c-3d1d-448e-b233-32974b83f1ea)", boundByController: true
I0814 08:24:00.814935  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:24:00.816338  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.119051ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.816879  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision not found
I0814 08:24:00.816905  112882 pv_controller.go:575] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" is released and reclaim policy "Delete" will be executed
I0814 08:24:00.816929  112882 pv_controller.go:777] updating PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: set phase Released
I0814 08:24:00.817583  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" deleted
I0814 08:24:00.817845  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (9.002052ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.820717  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-3219f95c-3d1d-448e-b233-32974b83f1ea/status: (3.496827ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.820969  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" with version 56239
I0814 08:24:00.821015  112882 pv_controller.go:798] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" entered phase "Released"
I0814 08:24:00.821026  112882 pv_controller.go:1022] reclaimVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: policy is Delete
I0814 08:24:00.821049  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-3219f95c-3d1d-448e-b233-32974b83f1ea[bacfa8b5-1df3-4902-b203-f011519e16e4]]
I0814 08:24:00.821137  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56215
I0814 08:24:00.821255  112882 pv_controller.go:1146] deleteVolumeOperation [pvc-3219f95c-3d1d-448e-b233-32974b83f1ea] started
I0814 08:24:00.821261  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: 51a1a4e3-b18e-4714-bc5e-f1c84714f46c)", boundByController: false
I0814 08:24:00.821359  112882 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound
I0814 08:24:00.821409  112882 pv_controller.go:547] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound not found
I0814 08:24:00.821453  112882 pv_controller.go:575] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I0814 08:24:00.821494  112882 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Released
I0814 08:24:00.823787  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-3219f95c-3d1d-448e-b233-32974b83f1ea: (2.131473ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.823977  112882 pv_controller.go:1250] isVolumeReleased[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: volume is released
I0814 08:24:00.823986  112882 pv_controller.go:1285] doDeleteVolume [pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]
I0814 08:24:00.824008  112882 pv_controller.go:1316] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" deleted
I0814 08:24:00.824014  112882 pv_controller.go:1193] deleteVolumeOperation [pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: success
I0814 08:24:00.825022  112882 store.go:228] deletion of /603b5aaa-a10d-4b21-ab4b-ec546bbcb214/persistentvolumes/pv-w-prebound failed because of a conflict, going to retry
I0814 08:24:00.825275  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.790237ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:00.825634  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56240
I0814 08:24:00.825657  112882 pv_controller.go:798] volume "pv-w-prebound" entered phase "Released"
I0814 08:24:00.825668  112882 pv_controller.go:1011] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I0814 08:24:00.825692  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" with version 56239
I0814 08:24:00.825717  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: phase: Released, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 3219f95c-3d1d-448e-b233-32974b83f1ea)", boundByController: true
I0814 08:24:00.825729  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:24:00.825745  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision not found
I0814 08:24:00.825750  112882 pv_controller.go:1022] reclaimVolume[pvc-3219f95c-3d1d-448e-b233-32974b83f1ea]: policy is Delete
I0814 08:24:00.825763  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-3219f95c-3d1d-448e-b233-32974b83f1ea[bacfa8b5-1df3-4902-b203-f011519e16e4]]
I0814 08:24:00.825769  112882 pv_controller.go:1642] operation "delete-pvc-3219f95c-3d1d-448e-b233-32974b83f1ea[bacfa8b5-1df3-4902-b203-f011519e16e4]" is already running, skipping
I0814 08:24:00.825780  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56240
I0814 08:24:00.825791  112882 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Released, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound (uid: 51a1a4e3-b18e-4714-bc5e-f1c84714f46c)", boundByController: false
I0814 08:24:00.825798  112882 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound
I0814 08:24:00.825811  112882 pv_controller.go:547] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound not found
I0814 08:24:00.825815  112882 pv_controller.go:1011] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
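[annotation] The teardown exercises both reclaim policies seen above: the released provisioned volume (policy Delete) gets a delete operation scheduled, while the pre-bound volume (policy Retain) is left in place. A compact sketch of that dispatch, using hypothetical helper names rather than the controller's own:

    package main

    import "fmt"

    type reclaimPolicy string

    const (
        reclaimRetain reclaimPolicy = "Retain"
        reclaimDelete reclaimPolicy = "Delete"
    )

    // reclaimVolume mirrors the branching in the log: Retain is a no-op,
    // Delete schedules an asynchronous delete operation for the volume.
    func reclaimVolume(name string, policy reclaimPolicy, schedule func(op string)) {
        switch policy {
        case reclaimRetain:
            fmt.Printf("reclaimVolume[%s]: policy is Retain, nothing to do\n", name)
        case reclaimDelete:
            schedule("delete-" + name)
        default:
            fmt.Printf("reclaimVolume[%s]: unknown policy %q\n", name, policy)
        }
    }

    func main() {
        schedule := func(op string) { fmt.Println("scheduleOperation:", op) }
        reclaimVolume("pv-w-prebound", reclaimRetain, schedule)
        reclaimVolume("pvc-3219f95c-3d1d-448e-b233-32974b83f1ea", reclaimDelete, schedule)
    }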
I0814 08:24:00.828777  112882 pv_controller_base.go:212] volume "pv-w-prebound" deleted
I0814 08:24:00.828810  112882 pv_controller_base.go:396] deletion of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-pv-prebound" was already processed
I0814 08:24:00.830005  112882 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-3219f95c-3d1d-448e-b233-32974b83f1ea: (5.89191ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.830394  112882 httplog.go:90] DELETE /api/v1/persistentvolumes: (11.868693ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43282]
I0814 08:24:00.830966  112882 pv_controller_base.go:212] volume "pvc-3219f95c-3d1d-448e-b233-32974b83f1ea" deleted
I0814 08:24:00.831010  112882 pv_controller_base.go:396] deletion of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" was already processed
I0814 08:24:00.838530  112882 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.726171ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.838831  112882 volume_binding_test.go:751] Running test immediate provisioned by controller
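[annotation] "Running test immediate provisioned by controller" switches to a StorageClass with Immediate volume binding, so the PV controller provisions as soon as the claim exists instead of waiting for a consuming pod. A sketch of how the two binding modes are expressed with the upstream storage/v1 types; the test's actual class definitions are not visible in this log, so the values below are placeholders taken from the class names it prints:

    package main

    import (
        "fmt"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // newClass builds a StorageClass with the given volume binding mode:
    // Immediate provisions as soon as the PVC is created, while
    // WaitForFirstConsumer defers provisioning until a pod is scheduled.
    func newClass(name string, mode storagev1.VolumeBindingMode) *storagev1.StorageClass {
        return &storagev1.StorageClass{
            ObjectMeta:        metav1.ObjectMeta{Name: name},
            Provisioner:       "kubernetes.io/mock-provisioner", // provisioner named in the events above
            VolumeBindingMode: &mode,
        }
    }

    func main() {
        immediate := newClass("immediate-hkwm", storagev1.VolumeBindingImmediate)
        wait := newClass("wait-b49c", storagev1.VolumeBindingWaitForFirstConsumer)
        fmt.Println(*immediate.VolumeBindingMode, *wait.VolumeBindingMode)
    }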
I0814 08:24:00.840407  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.361593ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.842340  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.465388ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.843989  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.199438ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.845956  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (1.399115ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.846567  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned", version 56249
I0814 08:24:00.846618  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:00.846643  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: no volume found
I0814 08:24:00.846652  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: started
I0814 08:24:00.846669  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned[4ad4c275-fbb0-4097-89eb-0e0e2c923a37]]
I0814 08:24:00.846715  112882 pv_controller.go:1372] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned] started, class: "immediate-hkwm"
I0814 08:24:00.848890  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (2.408856ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.849195  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" with version 56250
I0814 08:24:00.849301  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:00.849362  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: no volume found
I0814 08:24:00.849407  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: started
I0814 08:24:00.849456  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned[4ad4c275-fbb0-4097-89eb-0e0e2c923a37]]
I0814 08:24:00.849506  112882 pv_controller.go:1642] operation "provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned[4ad4c275-fbb0-4097-89eb-0e0e2c923a37]" is already running, skipping
I0814 08:24:00.849484  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound
I0814 08:24:00.849631  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound
I0814 08:24:00.849529  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-controller-provisioned: (2.53266ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:00.849901  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" with version 56250
E0814 08:24:00.849998  112882 factory.go:566] Error scheduling volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound: pod has unbound immediate PersistentVolumeClaims; retrying
I0814 08:24:00.850404  112882 factory.go:624] Updating pod condition for volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound to (PodScheduled==False, Reason=Unschedulable)
I0814 08:24:00.851964  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.203526ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.851980  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37: (1.863635ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:00.852234  112882 pv_controller.go:1476] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" created
I0814 08:24:00.852255  112882 pv_controller.go:1493] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: trying to save volume pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37
I0814 08:24:00.852691  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.513208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:00.854203  112882 httplog.go:90] POST /api/v1/persistentvolumes: (1.770462ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0814 08:24:00.854453  112882 pv_controller.go:1501] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" saved
I0814 08:24:00.854478  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37", version 56254
I0814 08:24:00.854515  112882 pv_controller.go:1554] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" provisioned for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.854619  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-controller-provisioned", UID:"4ad4c275-fbb0-4097-89eb-0e0e2c923a37", APIVersion:"v1", ResourceVersion:"56250", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37 using kubernetes.io/mock-provisioner
I0814 08:24:00.854661  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" with version 56254
I0814 08:24:00.854757  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned (uid: 4ad4c275-fbb0-4097-89eb-0e0e2c923a37)", boundByController: true
I0814 08:24:00.854780  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned
I0814 08:24:00.854798  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:00.854812  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:24:00.854873  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound/status: (3.523531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43294]
I0814 08:24:00.854967  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" with version 56250
I0814 08:24:00.854993  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:00.855029  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" found: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned (uid: 4ad4c275-fbb0-4097-89eb-0e0e2c923a37)", boundByController: true
I0814 08:24:00.855042  112882 pv_controller.go:931] binding volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.855055  112882 pv_controller.go:829] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
E0814 08:24:00.855090  112882 scheduler.go:506] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
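[annotation] "pod has unbound immediate PersistentVolumeClaims" is the scheduler refusing to place pod-i-unbound until its Immediate-mode claim is bound; once the controller-side provisioning above binds pvc-controller-provisioned, a retry can succeed. A simplified, hypothetical version of that check (not the scheduler's actual predicate code):

    package main

    import (
        "errors"
        "fmt"
    )

    type claim struct {
        name                 string
        bound                bool
        waitForFirstConsumer bool // true for WaitForFirstConsumer classes
    }

    // checkImmediateClaims mirrors the scheduler check above: a pod that
    // references an unbound claim with Immediate binding stays unschedulable.
    func checkImmediateClaims(claims []claim) error {
        for _, c := range claims {
            if !c.bound && !c.waitForFirstConsumer {
                return errors.New("pod has unbound immediate PersistentVolumeClaims")
            }
        }
        return nil
    }

    func main() {
        err := checkImmediateClaims([]claim{{name: "pvc-controller-provisioned"}})
        fmt.Println(err) // unschedulable until the PV controller binds the claim
    }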
I0814 08:24:00.855184  112882 pv_controller.go:841] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.855205  112882 pv_controller.go:777] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: set phase Bound
I0814 08:24:00.857825  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (2.848411ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:00.857885  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37/status: (2.31488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43294]
I0814 08:24:00.857889  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" with version 56257
I0814 08:24:00.857982  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned (uid: 4ad4c275-fbb0-4097-89eb-0e0e2c923a37)", boundByController: true
I0814 08:24:00.858074  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned
I0814 08:24:00.858089  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:00.858101  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:24:00.858114  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" with version 56257
I0814 08:24:00.858132  112882 pv_controller.go:798] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" entered phase "Bound"
I0814 08:24:00.858143  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: binding to "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37"
I0814 08:24:00.858156  112882 pv_controller.go:901] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.864440  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-controller-provisioned: (5.418506ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:00.865085  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" with version 56260
I0814 08:24:00.865123  112882 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: bound to "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37"
I0814 08:24:00.865133  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned] status: set phase Bound
I0814 08:24:00.867163  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-controller-provisioned/status: (1.726439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:00.867360  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" with version 56261
I0814 08:24:00.867489  112882 pv_controller.go:742] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" entered phase "Bound"
I0814 08:24:00.867608  112882 pv_controller.go:957] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.867730  112882 pv_controller.go:958] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned (uid: 4ad4c275-fbb0-4097-89eb-0e0e2c923a37)", boundByController: true
I0814 08:24:00.867841  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37", bindCompleted: true, boundByController: true
I0814 08:24:00.867973  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" with version 56261
I0814 08:24:00.868053  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: phase: Bound, bound to: "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37", bindCompleted: true, boundByController: true
I0814 08:24:00.868116  112882 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" found: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned (uid: 4ad4c275-fbb0-4097-89eb-0e0e2c923a37)", boundByController: true
I0814 08:24:00.868168  112882 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: claim is already correctly bound
I0814 08:24:00.868203  112882 pv_controller.go:931] binding volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.868243  112882 pv_controller.go:829] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.868291  112882 pv_controller.go:841] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.868344  112882 pv_controller.go:777] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: set phase Bound
I0814 08:24:00.868389  112882 pv_controller.go:780] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: phase Bound already set
I0814 08:24:00.868469  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: binding to "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37"
I0814 08:24:00.868560  112882 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned]: already bound to "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37"
I0814 08:24:00.868636  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned] status: set phase Bound
I0814 08:24:00.868691  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned] status: phase Bound already set
I0814 08:24:00.868740  112882 pv_controller.go:957] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned"
I0814 08:24:00.868805  112882 pv_controller.go:958] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned (uid: 4ad4c275-fbb0-4097-89eb-0e0e2c923a37)", boundByController: true
I0814 08:24:00.868942  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37", bindCompleted: true, boundByController: true
I0814 08:24:00.951818  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.797996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.051641  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.685405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.151645  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.828913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.251852  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.654461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.351555  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.718777ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.451671  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.699398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.551865  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (2.053902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.651701  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.879579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.752054  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (2.101826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.851787  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.968093ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:01.951715  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.816797ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.051943  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (2.051118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.151539  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.688417ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.251787  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.870078ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.351477  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.650075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.451445  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.621916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.551484  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.630084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.576186  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound
I0814 08:24:02.576273  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound
I0814 08:24:02.576436  112882 scheduler_binder.go:651] All bound volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound" match with Node "node-1"
I0814 08:24:02.576505  112882 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound", node "node-1"
I0814 08:24:02.576516  112882 scheduler_binder.go:266] AssumePodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound", node "node-1": all PVCs bound and nothing to do
I0814 08:24:02.576568  112882 factory.go:615] Attempting to bind pod-i-unbound to node-1
I0814 08:24:02.579759  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound/binding: (2.721476ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.580014  112882 scheduler.go:614] pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-i-unbound is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0814 08:24:02.581949  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.633866ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.651722  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-i-unbound: (1.711652ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.653714  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-controller-provisioned: (1.394536ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.659847  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (5.587004ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.664115  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" deleted
I0814 08:24:02.664130  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (3.934457ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.664318  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" with version 56257
I0814 08:24:02.664452  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned (uid: 4ad4c275-fbb0-4097-89eb-0e0e2c923a37)", boundByController: true
I0814 08:24:02.664510  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned
I0814 08:24:02.665781  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-controller-provisioned: (1.000892ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.666124  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned not found
I0814 08:24:02.666147  112882 pv_controller.go:575] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" is released and reclaim policy "Delete" will be executed
I0814 08:24:02.666159  112882 pv_controller.go:777] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: set phase Released
I0814 08:24:02.668276  112882 httplog.go:90] DELETE /api/v1/persistentvolumes: (3.615557ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.669466  112882 store.go:349] GuaranteedUpdate of /603b5aaa-a10d-4b21-ab4b-ec546bbcb214/persistentvolumes/pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37 failed because of a conflict, going to retry
I0814 08:24:02.669857  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37/status: (3.251801ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.670137  112882 pv_controller.go:790] updating PersistentVolume[pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37": StorageError: invalid object, Code: 4, Key: /603b5aaa-a10d-4b21-ab4b-ec546bbcb214/persistentvolumes/pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 6ab0f9d0-2317-4242-9f74-9ad848b323c9, UID in object meta: 
I0814 08:24:02.670176  112882 pv_controller_base.go:202] could not sync volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37": Operation cannot be fulfilled on persistentvolumes "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37": StorageError: invalid object, Code: 4, Key: /603b5aaa-a10d-4b21-ab4b-ec546bbcb214/persistentvolumes/pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 6ab0f9d0-2317-4242-9f74-9ad848b323c9, UID in object meta: 
I0814 08:24:02.670218  112882 pv_controller_base.go:212] volume "pvc-4ad4c275-fbb0-4097-89eb-0e0e2c923a37" deleted
I0814 08:24:02.670260  112882 pv_controller_base.go:396] deletion of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-controller-provisioned" was already processed
I0814 08:24:02.676077  112882 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.381221ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.676441  112882 volume_binding_test.go:751] Running test wait provisioned
I0814 08:24:02.677735  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.067693ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.679440  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.384418ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.681195  112882 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.199558ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.683187  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (1.494754ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.683629  112882 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision", version 56311
I0814 08:24:02.683669  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:02.683781  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:24:02.683829  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Pending
I0814 08:24:02.683843  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: phase Pending already set
I0814 08:24:02.683870  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-canprovision", UID:"2c8ae674-aa11-47cc-a875-7fb9334d3998", APIVersion:"v1", ResourceVersion:"56311", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0814 08:24:02.685716  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (1.784016ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.685914  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.644741ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.686263  112882 scheduling_queue.go:830] About to try and schedule pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision
I0814 08:24:02.686337  112882 scheduler.go:477] Attempting to schedule pod: volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision
I0814 08:24:02.686553  112882 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision", PVC "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" on node "node-1"
I0814 08:24:02.686644  112882 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision" that has no matching volumes on node "node-1" ...
I0814 08:24:02.686744  112882 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision", node "node-1"
I0814 08:24:02.686797  112882 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision", version 56311
I0814 08:24:02.686907  112882 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision", node "node-1"
I0814 08:24:02.689176  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.891049ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.689559  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56314
I0814 08:24:02.689659  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:02.689716  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:24:02.689726  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: started
I0814 08:24:02.689745  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[2c8ae674-aa11-47cc-a875-7fb9334d3998]]
I0814 08:24:02.689806  112882 pv_controller.go:1372] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] started, class: "wait-np4p"
I0814 08:24:02.691869  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.726881ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.692073  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56315
I0814 08:24:02.692077  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56315
I0814 08:24:02.692103  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:02.692124  112882 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: no volume found
I0814 08:24:02.692187  112882 pv_controller.go:1326] provisionClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: started
I0814 08:24:02.692226  112882 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[2c8ae674-aa11-47cc-a875-7fb9334d3998]]
I0814 08:24:02.692234  112882 pv_controller.go:1642] operation "provision-volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision[2c8ae674-aa11-47cc-a875-7fb9334d3998]" is already running, skipping
I0814 08:24:02.693473  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998: (1.269333ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.693711  112882 pv_controller.go:1476] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" created
I0814 08:24:02.693736  112882 pv_controller.go:1493] provisionClaimOperation [volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: trying to save volume pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998
I0814 08:24:02.695678  112882 httplog.go:90] POST /api/v1/persistentvolumes: (1.674545ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.695983  112882 pv_controller.go:1501] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" saved
I0814 08:24:02.696096  112882 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998", version 56316
I0814 08:24:02.696185  112882 pv_controller.go:1554] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" provisioned for claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.696253  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" with version 56316
I0814 08:24:02.696296  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 2c8ae674-aa11-47cc-a875-7fb9334d3998)", boundByController: true
I0814 08:24:02.696308  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:24:02.696321  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:02.696333  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:24:02.696359  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56315
I0814 08:24:02.696345  112882 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e", Name:"pvc-canprovision", UID:"2c8ae674-aa11-47cc-a875-7fb9334d3998", APIVersion:"v1", ResourceVersion:"56315", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998 using kubernetes.io/mock-provisioner
I0814 08:24:02.696375  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:02.696448  112882 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" found: phase: Pending, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 2c8ae674-aa11-47cc-a875-7fb9334d3998)", boundByController: true
I0814 08:24:02.696468  112882 pv_controller.go:931] binding volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.696478  112882 pv_controller.go:829] updating PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.696490  112882 pv_controller.go:841] updating PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.696497  112882 pv_controller.go:777] updating PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: set phase Bound
I0814 08:24:02.698315  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.53378ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:02.698807  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998/status: (2.038007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.699140  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" with version 56318
I0814 08:24:02.699225  112882 pv_controller.go:798] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" entered phase "Bound"
I0814 08:24:02.699307  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: binding to "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998"
I0814 08:24:02.699398  112882 pv_controller.go:901] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.699532  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" with version 56318
I0814 08:24:02.699655  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 2c8ae674-aa11-47cc-a875-7fb9334d3998)", boundByController: true
I0814 08:24:02.699720  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:24:02.699772  112882 pv_controller.go:555] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0814 08:24:02.699821  112882 pv_controller.go:603] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: volume not bound yet, waiting for syncClaim to fix it
I0814 08:24:02.701794  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.883169ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.702022  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56319
I0814 08:24:02.702053  112882 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: bound to "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998"
I0814 08:24:02.702063  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Bound
I0814 08:24:02.704002  112882 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision/status: (1.749564ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.704383  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56320
I0814 08:24:02.704416  112882 pv_controller.go:742] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" entered phase "Bound"
I0814 08:24:02.704431  112882 pv_controller.go:957] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.704449  112882 pv_controller.go:958] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 2c8ae674-aa11-47cc-a875-7fb9334d3998)", boundByController: true
I0814 08:24:02.704538  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998", bindCompleted: true, boundByController: true
I0814 08:24:02.704630  112882 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" with version 56320
I0814 08:24:02.704645  112882 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: phase: Bound, bound to: "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998", bindCompleted: true, boundByController: true
I0814 08:24:02.704661  112882 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" found: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 2c8ae674-aa11-47cc-a875-7fb9334d3998)", boundByController: true
I0814 08:24:02.704669  112882 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: claim is already correctly bound
I0814 08:24:02.704679  112882 pv_controller.go:931] binding volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.704688  112882 pv_controller.go:829] updating PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: binding to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.704828  112882 pv_controller.go:841] updating PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: already bound to "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.704855  112882 pv_controller.go:777] updating PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: set phase Bound
I0814 08:24:02.704861  112882 pv_controller.go:780] updating PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: phase Bound already set
I0814 08:24:02.704869  112882 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: binding to "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998"
I0814 08:24:02.704887  112882 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision]: already bound to "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998"
I0814 08:24:02.704893  112882 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: set phase Bound
I0814 08:24:02.704911  112882 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision] status: phase Bound already set
I0814 08:24:02.704919  112882 pv_controller.go:957] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" bound to claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision"
I0814 08:24:02.704933  112882 pv_controller.go:958] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" status after binding: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 2c8ae674-aa11-47cc-a875-7fb9334d3998)", boundByController: true
I0814 08:24:02.704944  112882 pv_controller.go:959] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998", bindCompleted: true, boundByController: true
I0814 08:24:02.788511  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.942215ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.889026  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (2.308352ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:02.988671  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.847051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.088697  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.934729ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.188524  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.834188ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.288459  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.815049ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.388511  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.848915ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.486612  112882 httplog.go:90] GET /api/v1/namespaces/default: (2.070986ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.487665  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.189604ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:03.488242  112882 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.261655ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.489871  112882 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.269628ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.576339  112882 cache.go:676] Couldn't expire cache for pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision. Binding is still in progress.
I0814 08:24:03.588322  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.717375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.688472  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.814748ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.689625  112882 scheduler_binder.go:545] All PVCs for pod "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision" are bound
I0814 08:24:03.689670  112882 factory.go:615] Attempting to bind pod-pvc-canprovision to node-1
I0814 08:24:03.692189  112882 httplog.go:90] POST /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision/binding: (2.032329ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.692427  112882 scheduler.go:614] pod volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pod-pvc-canprovision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0814 08:24:03.694400  112882 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/events: (1.635363ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.788320  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods/pod-pvc-canprovision: (1.661132ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.790221  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.367011ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.796306  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (5.540418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.800542  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (3.764076ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.801406  112882 pv_controller_base.go:258] claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" deleted
I0814 08:24:03.801450  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" with version 56318
I0814 08:24:03.801482  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: phase: Bound, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 2c8ae674-aa11-47cc-a875-7fb9334d3998)", boundByController: true
I0814 08:24:03.801494  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:24:03.802778  112882 httplog.go:90] GET /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims/pvc-canprovision: (1.114082ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:03.803000  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision not found
I0814 08:24:03.803018  112882 pv_controller.go:575] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" is released and reclaim policy "Delete" will be executed
I0814 08:24:03.803032  112882 pv_controller.go:777] updating PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: set phase Released
I0814 08:24:03.805152  112882 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998/status: (1.941985ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:03.805377  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" with version 56343
I0814 08:24:03.805435  112882 pv_controller.go:798] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" entered phase "Released"
I0814 08:24:03.805447  112882 pv_controller.go:1022] reclaimVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: policy is Delete
I0814 08:24:03.805469  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998[31ad154a-458a-4d76-bd46-11c99ff8cf28]]
I0814 08:24:03.805504  112882 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" with version 56343
I0814 08:24:03.805527  112882 pv_controller.go:489] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: phase: Released, bound to: "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision (uid: 2c8ae674-aa11-47cc-a875-7fb9334d3998)", boundByController: true
I0814 08:24:03.805540  112882 pv_controller.go:514] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: volume is bound to claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision
I0814 08:24:03.805561  112882 pv_controller.go:547] synchronizing PersistentVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: claim volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision not found
I0814 08:24:03.805568  112882 pv_controller.go:1022] reclaimVolume[pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998]: policy is Delete
I0814 08:24:03.805639  112882 pv_controller.go:1631] scheduleOperation[delete-pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998[31ad154a-458a-4d76-bd46-11c99ff8cf28]]
I0814 08:24:03.805648  112882 pv_controller.go:1642] operation "delete-pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998[31ad154a-458a-4d76-bd46-11c99ff8cf28]" is already running, skipping
I0814 08:24:03.805649  112882 pv_controller.go:1146] deleteVolumeOperation [pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998] started
I0814 08:24:03.807317  112882 httplog.go:90] DELETE /api/v1/persistentvolumes: (5.642695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.807446  112882 httplog.go:90] GET /api/v1/persistentvolumes/pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998: (1.420596ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43292]
I0814 08:24:03.807938  112882 pv_controller_base.go:212] volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" deleted
I0814 08:24:03.807943  112882 pv_controller.go:1153] error reading persistent volume "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998": persistentvolumes "pvc-2c8ae674-aa11-47cc-a875-7fb9334d3998" not found
I0814 08:24:03.807973  112882 pv_controller_base.go:396] deletion of claim "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-canprovision" was already processed
I0814 08:24:03.816788  112882 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.556981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.817027  112882 volume_binding_test.go:932] test cluster "volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e" start to tear down
I0814 08:24:03.818685  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pods: (1.285093ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.820399  112882 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/persistentvolumeclaims: (1.247595ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.821992  112882 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.192602ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.823556  112882 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (1.054046ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
E0814 08:24:03.824242  112882 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 08:24:03.824275  112882 pv_controller_base.go:298] Shutting down persistent volume controller
I0814 08:24:03.825087  112882 pv_controller_base.go:409] claim worker queue shutting down
I0814 08:24:03.824401  112882 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=55754&timeout=5m43s&timeoutSeconds=343&watch=true: (10.245809504s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43190]
I0814 08:24:03.824432  112882 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=55755&timeout=5m52s&timeoutSeconds=352&watch=true: (10.247131327s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43188]
I0814 08:24:03.825273  112882 pv_controller_base.go:352] volume worker queue shutting down
I0814 08:24:03.824443  112882 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=55969&timeout=8m13s&timeoutSeconds=493&watch=true: (10.246101321s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0814 08:24:03.824487  112882 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=55754&timeout=9m18s&timeoutSeconds=558&watch=true: (9.040178626s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43222]
I0814 08:24:03.824498  112882 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=55754&timeout=5m36s&timeoutSeconds=336&watch=true: (9.042442351s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43220]
I0814 08:24:03.824513  112882 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=55754&timeout=9m33s&timeoutSeconds=573&watch=true: (9.040773478s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0814 08:24:03.824551  112882 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=55753&timeout=6m57s&timeoutSeconds=417&watch=true: (10.24568157s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43196]
I0814 08:24:03.824567  112882 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=55754&timeout=9m55s&timeoutSeconds=595&watch=true: (10.246217783s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43132]
I0814 08:24:03.824619  112882 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=55757&timeout=5m13s&timeoutSeconds=313&watch=true: (10.246309979s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43186]
I0814 08:24:03.824621  112882 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=55757&timeout=8m40s&timeoutSeconds=520&watch=true: (10.246812396s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43192]
I0814 08:24:03.824654  112882 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=55753&timeout=9m4s&timeoutSeconds=544&watch=true: (9.042745696s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43218]
I0814 08:24:03.824676  112882 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=55757&timeout=7m20s&timeoutSeconds=440&watch=true: (9.042081217s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0814 08:24:03.824717  112882 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=55754&timeout=5m11s&timeoutSeconds=311&watch=true: (10.24417476s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0814 08:24:03.824727  112882 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=55757&timeout=6m53s&timeoutSeconds=413&watch=true: (10.242959808s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0814 08:24:03.824777  112882 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=55754&timeout=7m34s&timeoutSeconds=454&watch=true: (10.248219529s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43182]
I0814 08:24:03.824794  112882 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=55756&timeout=5m24s&timeoutSeconds=324&watch=true: (10.246396156s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43194]
I0814 08:24:03.830468  112882 httplog.go:90] DELETE /api/v1/nodes: (5.727072ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.830737  112882 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 08:24:03.832283  112882 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.275316ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
I0814 08:24:03.834522  112882 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.799421ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43290]
W0814 08:24:03.835205  112882 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0814 08:24:03.835223  112882 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
--- FAIL: TestVolumeProvision (13.79s)
    volume_binding_test.go:1149: Provisioning annotation on PVC volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind not behaving as expected: PVC volume-scheduling8b314d9f-8e52-4528-8c3d-4a9efd81c20e/pvc-w-canbind not expected to be provisioned, but found selected-node annotation
    volume_binding_test.go:1191: PV pv-w-canbind phase not Bound, got Available

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-081413.xml
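
The two assertions above say that the wait-mode claim pvc-w-canbind picked up a provisioning (selected-node) annotation it was not expected to have, and that the pre-created volume pv-w-canbind stayed Available instead of reaching Bound. The Go sketch below runs the same two checks against a live apiserver with client-go; it is a minimal diagnostic under assumed names (the annotation key volume.kubernetes.io/selected-node, the function and constant names), not the test's own helper code.

// Minimal diagnostic sketch (assumed names, not the test's code): report whether a
// PVC carries the scheduler's selected-node annotation and whether a PV is Bound.
// Uses the context-free client-go Get signature matching this 2019 codebase; newer
// client-go releases add a context.Context as the first argument.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// annSelectedNode is the annotation the volume scheduler writes when it picks a
// node for delayed binding/provisioning (assumed key, matching "selected-node" above).
const annSelectedNode = "volume.kubernetes.io/selected-node"

func checkProvisionState(cs kubernetes.Interface, ns, pvcName, pvName string) error {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(pvcName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if _, found := pvc.Annotations[annSelectedNode]; found {
		fmt.Printf("PVC %s/%s unexpectedly carries %s\n", ns, pvcName, annSelectedNode)
	}
	pv, err := cs.CoreV1().PersistentVolumes().Get(pvName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pv.Status.Phase != v1.VolumeBound {
		fmt.Printf("PV %s phase is %q, expected Bound\n", pvName, pv.Status.Phase)
	}
	return nil
}

Called with the namespace and object names from the messages above (for example checkProvisionState(cs, "volume-scheduling8b314d9f-...", "pvc-w-canbind", "pv-w-canbind")), the two printed lines correspond one-to-one to the two failed assertions.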

Show 4 Skipped Tests

Error lines from build-log.txt

... skipping 731 lines ...
W0814 08:09:43.079] W0814 08:09:42.862569   53274 controllermanager.go:527] Skipping "csrsigning"
W0814 08:09:43.079] I0814 08:09:42.863111   53274 controllermanager.go:535] Started "pvc-protection"
W0814 08:09:43.080] I0814 08:09:42.863188   53274 pvc_protection_controller.go:100] Starting PVC protection controller
W0814 08:09:43.080] I0814 08:09:42.863331   53274 controller_utils.go:1029] Waiting for caches to sync for PVC protection controller
W0814 08:09:43.080] I0814 08:09:42.870952   53274 controllermanager.go:535] Started "statefulset"
W0814 08:09:43.080] I0814 08:09:42.871550   53274 node_lifecycle_controller.go:77] Sending events to api server
W0814 08:09:43.081] E0814 08:09:42.871649   53274 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 08:09:43.081] W0814 08:09:42.871680   53274 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0814 08:09:43.081] I0814 08:09:42.870998   53274 stateful_set.go:145] Starting stateful set controller
W0814 08:09:43.081] I0814 08:09:42.871885   53274 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
W0814 08:09:43.081] I0814 08:09:42.872924   53274 controllermanager.go:535] Started "daemonset"
W0814 08:09:43.082] W0814 08:09:42.872944   53274 controllermanager.go:514] "bootstrapsigner" is disabled
W0814 08:09:43.082] I0814 08:09:42.873222   53274 controllermanager.go:535] Started "pv-protection"
... skipping 7 lines ...
W0814 08:09:43.083] I0814 08:09:42.874800   53274 controllermanager.go:535] Started "csrapproving"
W0814 08:09:43.084] I0814 08:09:42.874876   53274 certificate_controller.go:113] Starting certificate controller
W0814 08:09:43.084] I0814 08:09:42.874917   53274 controller_utils.go:1029] Waiting for caches to sync for certificate controller
W0814 08:09:43.084] I0814 08:09:42.875255   53274 controllermanager.go:535] Started "csrcleaner"
W0814 08:09:43.084] W0814 08:09:42.875308   53274 controllermanager.go:527] Skipping "nodeipam"
W0814 08:09:43.084] I0814 08:09:42.875456   53274 cleaner.go:81] Starting CSR cleaner controller
W0814 08:09:43.085] E0814 08:09:42.875860   53274 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0814 08:09:43.085] W0814 08:09:42.875919   53274 controllermanager.go:527] Skipping "service"
W0814 08:09:43.085] W0814 08:09:42.875936   53274 controllermanager.go:527] Skipping "ttl-after-finished"
W0814 08:09:43.085] I0814 08:09:42.876915   53274 serviceaccounts_controller.go:117] Starting service account controller
W0814 08:09:43.086] I0814 08:09:42.876960   53274 controller_utils.go:1029] Waiting for caches to sync for service account controller
W0814 08:09:43.086] I0814 08:09:42.878912   53274 controllermanager.go:535] Started "serviceaccount"
W0814 08:09:43.086] I0814 08:09:42.879438   53274 controllermanager.go:535] Started "job"
... skipping 49 lines ...
W0814 08:09:43.361] I0814 08:09:43.360692   53274 controllermanager.go:535] Started "deployment"
W0814 08:09:43.361] I0814 08:09:43.360707   53274 garbagecollector.go:129] Starting garbage collector controller
W0814 08:09:43.361] I0814 08:09:43.360747   53274 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 08:09:43.361] I0814 08:09:43.360775   53274 graph_builder.go:282] GraphBuilder running
W0814 08:09:43.362] I0814 08:09:43.361702   53274 deployment_controller.go:152] Starting deployment controller
W0814 08:09:43.362] I0814 08:09:43.361733   53274 controller_utils.go:1029] Waiting for caches to sync for deployment controller
W0814 08:09:43.400] W0814 08:09:43.400279   53274 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0814 08:09:43.454] I0814 08:09:43.454048   53274 controller_utils.go:1036] Caches are synced for namespace controller
W0814 08:09:43.457] I0814 08:09:43.457478   53274 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0814 08:09:43.461] I0814 08:09:43.460957   53274 controller_utils.go:1036] Caches are synced for TTL controller
W0814 08:09:43.474] I0814 08:09:43.474265   53274 controller_utils.go:1036] Caches are synced for GC controller
W0814 08:09:43.475] I0814 08:09:43.474273   53274 controller_utils.go:1036] Caches are synced for PV protection controller
W0814 08:09:43.477] E0814 08:09:43.477431   53274 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0814 08:09:43.483] E0814 08:09:43.483170   53274 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 08:09:43.484] I0814 08:09:43.483322   53274 controller_utils.go:1036] Caches are synced for taint controller
W0814 08:09:43.485] I0814 08:09:43.484134   53274 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
W0814 08:09:43.485] I0814 08:09:43.484381   53274 node_lifecycle_controller.go:1039] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0814 08:09:43.486] I0814 08:09:43.484756   53274 taint_manager.go:186] Starting NoExecuteTaintManager
W0814 08:09:43.486] I0814 08:09:43.485194   53274 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"51162cb6-3dc8-4912-85a2-a8446401bceb", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0814 08:09:43.562] I0814 08:09:43.561820   53274 controller_utils.go:1036] Caches are synced for HPA controller
... skipping 100 lines ...
I0814 08:09:46.824] +++ working dir: /go/src/k8s.io/kubernetes
I0814 08:09:46.827] +++ command: run_RESTMapper_evaluation_tests
I0814 08:09:46.837] +++ [0814 08:09:46] Creating namespace namespace-1565770186-31659
I0814 08:09:46.905] namespace/namespace-1565770186-31659 created
I0814 08:09:46.970] Context "test" modified.
I0814 08:09:46.975] +++ [0814 08:09:46] Testing RESTMapper
I0814 08:09:47.069] +++ [0814 08:09:47] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0814 08:09:47.080] +++ exit code: 0
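[editor's note] The "kubectl get unknownresourcetype" check above exercises the discovery-backed RESTMapper, which is what turns a resource name into a group/version/resource. As a rough, stand-alone sketch (not the test's own code), the same lookup can be done from client-go roughly as below; the kubeconfig path is hypothetical and a recent client-go is assumed. kubectl's own message for the miss is the one shown in the log.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the CI job above talks to a local test API server instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/test.kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groupResources, err := restmapper.GetAPIGroupResources(dc)
	if err != nil {
		panic(err)
	}
	mapper := restmapper.NewDiscoveryRESTMapper(groupResources)

	// A kind the server does not know about yields a "no matches for kind" error,
	// which kubectl reports as a missing resource type, as in the log line above.
	_, err = mapper.RESTMapping(schema.GroupKind{Kind: "UnknownResourceType"})
	fmt.Println(err)
}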
I0814 08:09:47.186] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0814 08:09:47.187] bindings                                                                      true         Binding
I0814 08:09:47.187] componentstatuses                 cs                                          false        ComponentStatus
I0814 08:09:47.187] configmaps                        cm                                          true         ConfigMap
I0814 08:09:47.187] endpoints                         ep                                          true         Endpoints
... skipping 664 lines ...
I0814 08:10:05.099] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0814 08:10:05.181] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0814 08:10:05.248] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0814 08:10:05.332] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
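[editor's note] The two poddisruptionbudget checks above assert that maxUnavailable accepts both an absolute count ("2") and a percentage ("50%"). A minimal sketch of how those two specs look when built with the Go API types; the selector labels here are invented for illustration, and only MaxUnavailable is set because, as a later log line notes, min-available and max-unavailable cannot both be specified.

package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// IntOrString lets the same field carry either form.
	two := intstr.FromInt(2)
	half := intstr.FromString("50%")

	for _, pdb := range []policyv1beta1.PodDisruptionBudget{
		{
			ObjectMeta: metav1.ObjectMeta{Name: "test-pdb-3"},
			Spec: policyv1beta1.PodDisruptionBudgetSpec{
				MaxUnavailable: &two,
				Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
			},
		},
		{
			ObjectMeta: metav1.ObjectMeta{Name: "test-pdb-4"},
			Spec: policyv1beta1.PodDisruptionBudgetSpec{
				MaxUnavailable: &half,
				Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
			},
		},
	} {
		fmt.Println(pdb.Name, pdb.Spec.MaxUnavailable.String())
	}
}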
I0814 08:10:05.473] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:05.652] (Bpod/env-test-pod created
W0814 08:10:05.753] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0814 08:10:05.753] error: setting 'all' parameter but found a non empty selector. 
W0814 08:10:05.754] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 08:10:05.754] I0814 08:10:04.796249   49808 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0814 08:10:05.754] error: min-available and max-unavailable cannot be both specified
I0814 08:10:05.854] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0814 08:10:05.855] Name:         env-test-pod
I0814 08:10:05.855] Namespace:    test-kubectl-describe-pod
I0814 08:10:05.855] Priority:     0
I0814 08:10:05.855] Node:         <none>
I0814 08:10:05.855] Labels:       <none>
... skipping 173 lines ...
I0814 08:10:18.642] (Bpod/valid-pod patched
I0814 08:10:18.728] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0814 08:10:18.798] (Bpod/valid-pod patched
I0814 08:10:18.887] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0814 08:10:19.032] (Bpod/valid-pod patched
I0814 08:10:19.121] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 08:10:19.280] (B+++ [0814 08:10:19] "kubectl patch with resourceVersion 495" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
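[editor's note] The patch with a stale resourceVersion above fails with a 409 Conflict by design: that is the API server's optimistic-concurrency check. A hedged client-go sketch of the usual remedy, retrying the read-modify-write on conflict; the kubeconfig path is hypothetical and recent client-go signatures (with a context argument) are assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/test.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// RetryOnConflict re-reads the object and retries the mutation whenever the server
	// answers with the same Conflict seen above for the stale resourceVersion.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "valid-pod", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["patched"] = "true"
		_, err = client.CoreV1().Pods("default").Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	fmt.Println("update result:", err)
}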
I0814 08:10:19.489] pod "valid-pod" deleted
I0814 08:10:19.499] pod/valid-pod replaced
I0814 08:10:19.585] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0814 08:10:19.728] (BSuccessful
I0814 08:10:19.728] message:error: --grace-period must have --force specified
I0814 08:10:19.728] has:\-\-grace-period must have \-\-force specified
I0814 08:10:19.863] Successful
I0814 08:10:19.864] message:error: --timeout must have --force specified
I0814 08:10:19.864] has:\-\-timeout must have \-\-force specified
I0814 08:10:20.005] node/node-v1-test created
W0814 08:10:20.106] W0814 08:10:20.005454   53274 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0814 08:10:20.207] node/node-v1-test replaced
I0814 08:10:20.247] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0814 08:10:20.321] (Bnode "node-v1-test" deleted
I0814 08:10:20.408] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 08:10:20.653] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0814 08:10:21.543] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 66 lines ...
I0814 08:10:25.263] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:25.397] (Bpod/test-pod created
W0814 08:10:25.498] Edit cancelled, no changes made.
W0814 08:10:25.499] Edit cancelled, no changes made.
W0814 08:10:25.499] Edit cancelled, no changes made.
W0814 08:10:25.499] Edit cancelled, no changes made.
W0814 08:10:25.499] error: 'name' already has a value (valid-pod), and --overwrite is false
W0814 08:10:25.500] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 08:10:25.500] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 08:10:25.601] pod "test-pod" deleted
I0814 08:10:25.601] +++ [0814 08:10:25] Creating namespace namespace-1565770225-15688
I0814 08:10:25.635] namespace/namespace-1565770225-15688 created
I0814 08:10:25.701] Context "test" modified.
... skipping 41 lines ...
I0814 08:10:28.611] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0814 08:10:28.613] +++ working dir: /go/src/k8s.io/kubernetes
I0814 08:10:28.615] +++ command: run_kubectl_create_error_tests
I0814 08:10:28.624] +++ [0814 08:10:28] Creating namespace namespace-1565770228-5869
I0814 08:10:28.688] namespace/namespace-1565770228-5869 created
I0814 08:10:28.753] Context "test" modified.
I0814 08:10:28.758] +++ [0814 08:10:28] Testing kubectl create with error
W0814 08:10:28.859] Error: must specify one of -f and -k
W0814 08:10:28.859] 
W0814 08:10:28.859] Create a resource from a file or from stdin.
W0814 08:10:28.859] 
W0814 08:10:28.860]  JSON and YAML formats are accepted.
W0814 08:10:28.860] 
W0814 08:10:28.860] Examples:
... skipping 41 lines ...
W0814 08:10:28.867] 
W0814 08:10:28.867] Usage:
W0814 08:10:28.867]   kubectl create -f FILENAME [options]
W0814 08:10:28.867] 
W0814 08:10:28.867] Use "kubectl <command> --help" for more information about a given command.
W0814 08:10:28.867] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0814 08:10:28.968] +++ [0814 08:10:28] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0814 08:10:29.069] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 08:10:29.069] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 08:10:29.170] +++ exit code: 0
I0814 08:10:29.170] Recording: run_kubectl_apply_tests
I0814 08:10:29.171] Running command: run_kubectl_apply_tests
I0814 08:10:29.171] 
... skipping 19 lines ...
W0814 08:10:31.047] I0814 08:10:31.047258   49808 client.go:354] parsed scheme: ""
W0814 08:10:31.048] I0814 08:10:31.047291   49808 client.go:354] scheme "" not registered, fallback to default scheme
W0814 08:10:31.048] I0814 08:10:31.047322   49808 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0814 08:10:31.048] I0814 08:10:31.047378   49808 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 08:10:31.049] I0814 08:10:31.048631   49808 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 08:10:31.050] I0814 08:10:31.050319   49808 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0814 08:10:31.129] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0814 08:10:31.230] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0814 08:10:31.231] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 08:10:31.246] +++ exit code: 0
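[editor's note] The "(server dry run)" apply above relies on the API server's DryRun support: the request goes through admission and validation, but nothing is persisted. A minimal typed-client sketch of the same option, assuming a recent client-go; the ConfigMap name and kubeconfig path are invented for illustration.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/test.kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "dry-run-demo"},
		Data:       map[string]string{"key": "value"},
	}

	// DryRun=All asks the server to evaluate the request fully but persist nothing.
	created, err := client.CoreV1().ConfigMaps("default").Create(
		context.TODO(), cm, metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
	if err != nil {
		panic(err)
	}
	fmt.Println("dry-run created:", created.Name)
}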
I0814 08:10:31.276] Recording: run_kubectl_run_tests
I0814 08:10:31.277] Running command: run_kubectl_run_tests
I0814 08:10:31.295] 
... skipping 84 lines ...
I0814 08:10:33.525] Context "test" modified.
I0814 08:10:33.530] +++ [0814 08:10:33] Testing kubectl create filter
I0814 08:10:33.611] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:33.764] (Bpod/selector-test-pod created
I0814 08:10:33.854] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0814 08:10:33.930] (BSuccessful
I0814 08:10:33.930] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0814 08:10:33.930] has:pods "selector-test-pod-dont-apply" not found
I0814 08:10:34.005] pod "selector-test-pod" deleted
I0814 08:10:34.020] +++ exit code: 0
I0814 08:10:34.048] Recording: run_kubectl_apply_deployments_tests
I0814 08:10:34.049] Running command: run_kubectl_apply_deployments_tests
I0814 08:10:34.065] 
... skipping 23 lines ...
I0814 08:10:35.599] (Bapps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:35.680] (Bapps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:35.761] (Bapps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:35.902] (Bdeployment.apps/nginx created
I0814 08:10:35.993] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0814 08:10:40.197] (BSuccessful
I0814 08:10:40.198] message:Error from server (Conflict): error when applying patch:
I0814 08:10:40.198] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565770234-31689\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0814 08:10:40.198] to:
I0814 08:10:40.199] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0814 08:10:40.199] Name: "nginx", Namespace: "namespace-1565770234-31689"
I0814 08:10:40.202] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565770234-31689\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-14T08:10:35Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T08:10:35Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-14T08:10:35Z"]] "name":"nginx" "namespace":"namespace-1565770234-31689" "resourceVersion":"584" "selfLink":"/apis/apps/v1/namespaces/namespace-1565770234-31689/deployments/nginx" "uid":"bce1eab0-c873-41b5-8a9f-7a8ada7e8ed4"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-14T08:10:35Z" "lastUpdateTime":"2019-08-14T08:10:35Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-14T08:10:35Z" "lastUpdateTime":"2019-08-14T08:10:35Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0814 08:10:40.202] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0814 08:10:40.202] has:Error from server (Conflict)
W0814 08:10:40.302] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 08:10:40.303] I0814 08:10:31.616424   49808 controller.go:606] quota admission added evaluator for: jobs.batch
W0814 08:10:40.303] I0814 08:10:31.630113   53274 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565770231-25685", Name:"pi", UID:"2596dd0f-b8b6-48b1-bdf9-80f20eccaa6a", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-k29kt
W0814 08:10:40.303] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 08:10:40.303] I0814 08:10:32.151274   49808 controller.go:606] quota admission added evaluator for: deployments.apps
W0814 08:10:40.304] I0814 08:10:32.173472   49808 controller.go:606] quota admission added evaluator for: replicasets.apps
... skipping 17 lines ...
I0814 08:10:45.512]           "name": "nginx2"
I0814 08:10:45.512] has:"name": "nginx2"
W0814 08:10:45.613] I0814 08:10:45.416377   53274 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565770234-31689", Name:"nginx", UID:"d5d71084-9c84-4c33-9aa5-bf8f54597124", APIVersion:"apps/v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 08:10:45.614] I0814 08:10:45.420476   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770234-31689", Name:"nginx-594f77b9f6", UID:"225ba3f9-053b-4b73-a049-60fd67e675cb", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-9clbq
W0814 08:10:45.614] I0814 08:10:45.427833   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770234-31689", Name:"nginx-594f77b9f6", UID:"225ba3f9-053b-4b73-a049-60fd67e675cb", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-llvkf
W0814 08:10:45.614] I0814 08:10:45.438000   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770234-31689", Name:"nginx-594f77b9f6", UID:"225ba3f9-053b-4b73-a049-60fd67e675cb", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-z8dvl
W0814 08:10:49.755] E0814 08:10:49.754549   53274 replica_set.go:450] Sync "namespace-1565770234-31689/nginx-594f77b9f6" failed with replicasets.apps "nginx-594f77b9f6" not found
W0814 08:10:50.746] I0814 08:10:50.745828   53274 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565770234-31689", Name:"nginx", UID:"533e6b0f-04f5-46e1-a351-391f9a2eb741", APIVersion:"apps/v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 08:10:50.752] I0814 08:10:50.751716   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770234-31689", Name:"nginx-594f77b9f6", UID:"811685ad-381e-4397-b410-22347c4dcdf5", APIVersion:"apps/v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-hlpnv
W0814 08:10:50.756] I0814 08:10:50.755756   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770234-31689", Name:"nginx-594f77b9f6", UID:"811685ad-381e-4397-b410-22347c4dcdf5", APIVersion:"apps/v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-69vdx
W0814 08:10:50.757] I0814 08:10:50.757159   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770234-31689", Name:"nginx-594f77b9f6", UID:"811685ad-381e-4397-b410-22347c4dcdf5", APIVersion:"apps/v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-q74cq
I0814 08:10:50.858] Successful
I0814 08:10:50.859] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 159 lines ...
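[editor's note] The "selector does not match template labels" rejection above is expected: in apps/v1 the Deployment selector is immutable and must select the pod template's labels. A small sketch of the invariant using the Go API types; the deployment name and image are copied from the log, the rest is illustrative.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "nginx3"}

	// Reusing one labels map for both the selector and the template keeps the object
	// valid; diverging them produces the Invalid error shown in the log.
	deploy := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx"},
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "k8s.gcr.io/nginx:test-cmd"}},
				},
			},
		},
	}
	fmt.Println(deploy.Spec.Selector.MatchLabels, deploy.Spec.Template.Labels)
}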
I0814 08:10:52.480] +++ [0814 08:10:52] Creating namespace namespace-1565770252-11912
I0814 08:10:52.556] namespace/namespace-1565770252-11912 created
I0814 08:10:52.623] Context "test" modified.
I0814 08:10:52.628] +++ [0814 08:10:52] Testing kubectl get
I0814 08:10:52.714] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:52.797] (BSuccessful
I0814 08:10:52.798] message:Error from server (NotFound): pods "abc" not found
I0814 08:10:52.798] has:pods "abc" not found
I0814 08:10:52.884] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:52.969] (BSuccessful
I0814 08:10:52.969] message:Error from server (NotFound): pods "abc" not found
I0814 08:10:52.969] has:pods "abc" not found
I0814 08:10:53.057] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:53.139] (BSuccessful
I0814 08:10:53.139] message:{
I0814 08:10:53.139]     "apiVersion": "v1",
I0814 08:10:53.139]     "items": [],
... skipping 23 lines ...
I0814 08:10:53.456] has not:No resources found
I0814 08:10:53.531] Successful
I0814 08:10:53.532] message:NAME
I0814 08:10:53.532] has not:No resources found
I0814 08:10:53.614] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:53.710] (BSuccessful
I0814 08:10:53.710] message:error: the server doesn't have a resource type "foobar"
I0814 08:10:53.710] has not:No resources found
I0814 08:10:53.796] Successful
I0814 08:10:53.797] message:No resources found in namespace-1565770252-11912 namespace.
I0814 08:10:53.797] has:No resources found
I0814 08:10:53.876] Successful
I0814 08:10:53.876] message:
I0814 08:10:53.876] has not:No resources found
I0814 08:10:53.954] Successful
I0814 08:10:53.954] message:No resources found in namespace-1565770252-11912 namespace.
I0814 08:10:53.954] has:No resources found
I0814 08:10:54.037] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:10:54.118] (BSuccessful
I0814 08:10:54.119] message:Error from server (NotFound): pods "abc" not found
I0814 08:10:54.119] has:pods "abc" not found
I0814 08:10:54.120] FAIL!
I0814 08:10:54.120] message:Error from server (NotFound): pods "abc" not found
I0814 08:10:54.120] has not:List
I0814 08:10:54.121] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0814 08:10:54.228] Successful
I0814 08:10:54.228] message:I0814 08:10:54.186476   63821 loader.go:375] Config loaded from file:  /tmp/tmp.JYO2Lm4LU5/.kube/config
I0814 08:10:54.229] I0814 08:10:54.187846   63821 round_trippers.go:471] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
I0814 08:10:54.229] I0814 08:10:54.207403   63821 round_trippers.go:471] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I0814 08:10:59.730] Successful
I0814 08:10:59.731] message:NAME    DATA   AGE
I0814 08:10:59.731] one     0      0s
I0814 08:10:59.731] three   0      0s
I0814 08:10:59.732] two     0      0s
I0814 08:10:59.732] STATUS    REASON          MESSAGE
I0814 08:10:59.732] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 08:10:59.732] has not:watch is only supported on individual resources
I0814 08:11:00.813] Successful
I0814 08:11:00.813] message:STATUS    REASON          MESSAGE
I0814 08:11:00.814] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 08:11:00.814] has not:watch is only supported on individual resources
I0814 08:11:00.818] +++ [0814 08:11:00] Creating namespace namespace-1565770260-3786
I0814 08:11:00.889] namespace/namespace-1565770260-3786 created
I0814 08:11:00.956] Context "test" modified.
I0814 08:11:01.038] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:01.196] (Bpod/valid-pod created
... skipping 104 lines ...
I0814 08:11:01.291] }
I0814 08:11:01.373] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 08:11:01.606] (B<no value>Successful
I0814 08:11:01.607] message:valid-pod:
I0814 08:11:01.607] has:valid-pod:
I0814 08:11:01.698] Successful
I0814 08:11:01.698] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0814 08:11:01.699] 	template was:
I0814 08:11:01.699] 		{.missing}
I0814 08:11:01.699] 	object given to jsonpath engine was:
I0814 08:11:01.701] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-14T08:11:01Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-14T08:11:01Z"}}, "name":"valid-pod", "namespace":"namespace-1565770260-3786", "resourceVersion":"685", "selfLink":"/api/v1/namespaces/namespace-1565770260-3786/pods/valid-pod", "uid":"b0a2a98b-930d-4f4a-94f0-748945a8d8ab"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0814 08:11:01.701] has:missing is not found
I0814 08:11:01.778] Successful
I0814 08:11:01.778] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0814 08:11:01.778] 	template was:
I0814 08:11:01.778] 		{{.missing}}
I0814 08:11:01.778] 	raw data was:
I0814 08:11:01.780] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-14T08:11:01Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-14T08:11:01Z"}],"name":"valid-pod","namespace":"namespace-1565770260-3786","resourceVersion":"685","selfLink":"/api/v1/namespaces/namespace-1565770260-3786/pods/valid-pod","uid":"b0a2a98b-930d-4f4a-94f0-748945a8d8ab"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0814 08:11:01.780] 	object given to template engine was:
I0814 08:11:01.781] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-14T08:11:01Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-14T08:11:01Z]] name:valid-pod namespace:namespace-1565770260-3786 resourceVersion:685 selfLink:/api/v1/namespaces/namespace-1565770260-3786/pods/valid-pod uid:b0a2a98b-930d-4f4a-94f0-748945a8d8ab] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0814 08:11:01.781] has:map has no entry for key "missing"
W0814 08:11:01.882] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
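[editor's note] Both template failures above (jsonpath and go-template) are the "missing key" class of error. A stand-alone illustration with the standard library's text/template, not kubectl's own template code: with missingkey=error, looking up a key that the map does not have fails with the same kind of message.

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// By default text/template prints <no value> for a missing map key;
	// missingkey=error turns it into an execution error instead.
	tmpl := template.Must(template.New("output").Option("missingkey=error").Parse("{{.missing}}"))

	obj := map[string]interface{}{"kind": "Pod"} // no "missing" entry
	if err := tmpl.Execute(os.Stdout, obj); err != nil {
		fmt.Println()
		fmt.Println("template error:", err)
	}
}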
I0814 08:11:02.861] Successful
I0814 08:11:02.861] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 08:11:02.862] valid-pod   0/1     Pending   0          0s
I0814 08:11:02.862] STATUS      REASON          MESSAGE
I0814 08:11:02.862] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 08:11:02.862] has:STATUS
I0814 08:11:02.863] Successful
I0814 08:11:02.863] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 08:11:02.864] valid-pod   0/1     Pending   0          0s
I0814 08:11:02.864] STATUS      REASON          MESSAGE
I0814 08:11:02.864] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 08:11:02.864] has:valid-pod
I0814 08:11:03.943] Successful
I0814 08:11:03.943] message:pod/valid-pod
I0814 08:11:03.943] has not:STATUS
I0814 08:11:03.945] Successful
I0814 08:11:03.945] message:pod/valid-pod
... skipping 144 lines ...
I0814 08:11:05.042] status:
I0814 08:11:05.042]   phase: Pending
I0814 08:11:05.042]   qosClass: Guaranteed
I0814 08:11:05.042] ---
I0814 08:11:05.042] has:name: valid-pod
I0814 08:11:05.112] Successful
I0814 08:11:05.113] message:Error from server (NotFound): pods "invalid-pod" not found
I0814 08:11:05.113] has:"invalid-pod" not found
I0814 08:11:05.189] pod "valid-pod" deleted
I0814 08:11:05.277] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:05.421] (Bpod/redis-master created
I0814 08:11:05.426] pod/valid-pod created
I0814 08:11:05.516] Successful
... skipping 35 lines ...
I0814 08:11:06.556] +++ command: run_kubectl_exec_pod_tests
I0814 08:11:06.566] +++ [0814 08:11:06] Creating namespace namespace-1565770266-24886
I0814 08:11:06.634] namespace/namespace-1565770266-24886 created
I0814 08:11:06.704] Context "test" modified.
I0814 08:11:06.710] +++ [0814 08:11:06] Testing kubectl exec POD COMMAND
I0814 08:11:06.786] Successful
I0814 08:11:06.787] message:Error from server (NotFound): pods "abc" not found
I0814 08:11:06.787] has:pods "abc" not found
I0814 08:11:06.936] pod/test-pod created
I0814 08:11:07.030] Successful
I0814 08:11:07.031] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 08:11:07.031] has not:pods "test-pod" not found
I0814 08:11:07.032] Successful
I0814 08:11:07.032] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 08:11:07.032] has not:pod or type/name must be specified
I0814 08:11:07.107] pod "test-pod" deleted
I0814 08:11:07.123] +++ exit code: 0
I0814 08:11:07.152] Recording: run_kubectl_exec_resource_name_tests
I0814 08:11:07.152] Running command: run_kubectl_exec_resource_name_tests
I0814 08:11:07.170] 
... skipping 2 lines ...
I0814 08:11:07.176] +++ command: run_kubectl_exec_resource_name_tests
I0814 08:11:07.187] +++ [0814 08:11:07] Creating namespace namespace-1565770267-31296
I0814 08:11:07.256] namespace/namespace-1565770267-31296 created
I0814 08:11:07.323] Context "test" modified.
I0814 08:11:07.328] +++ [0814 08:11:07] Testing kubectl exec TYPE/NAME COMMAND
I0814 08:11:07.416] Successful
I0814 08:11:07.417] message:error: the server doesn't have a resource type "foo"
I0814 08:11:07.417] has:error:
I0814 08:11:07.493] Successful
I0814 08:11:07.494] message:Error from server (NotFound): deployments.apps "bar" not found
I0814 08:11:07.494] has:"bar" not found
I0814 08:11:07.634] pod/test-pod created
I0814 08:11:07.775] replicaset.apps/frontend created
W0814 08:11:07.876] I0814 08:11:07.780290   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770267-31296", Name:"frontend", UID:"ac323549-cabd-4cac-adc3-372710994332", APIVersion:"apps/v1", ResourceVersion:"738", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mdhsl
W0814 08:11:07.877] I0814 08:11:07.784087   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770267-31296", Name:"frontend", UID:"ac323549-cabd-4cac-adc3-372710994332", APIVersion:"apps/v1", ResourceVersion:"738", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v4lp5
W0814 08:11:07.877] I0814 08:11:07.784337   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770267-31296", Name:"frontend", UID:"ac323549-cabd-4cac-adc3-372710994332", APIVersion:"apps/v1", ResourceVersion:"738", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rbsgz
I0814 08:11:07.978] configmap/test-set-env-config created
I0814 08:11:07.994] Successful
I0814 08:11:07.994] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0814 08:11:07.995] has:not implemented
I0814 08:11:08.078] Successful
I0814 08:11:08.078] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 08:11:08.078] has not:not found
I0814 08:11:08.079] Successful
I0814 08:11:08.079] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 08:11:08.080] has not:pod or type/name must be specified
I0814 08:11:08.174] Successful
I0814 08:11:08.174] message:Error from server (BadRequest): pod frontend-mdhsl does not have a host assigned
I0814 08:11:08.174] has not:not found
I0814 08:11:08.175] Successful
I0814 08:11:08.175] message:Error from server (BadRequest): pod frontend-mdhsl does not have a host assigned
I0814 08:11:08.176] has not:pod or type/name must be specified
I0814 08:11:08.251] pod "test-pod" deleted
I0814 08:11:08.329] replicaset.apps "frontend" deleted
I0814 08:11:08.407] configmap "test-set-env-config" deleted
I0814 08:11:08.423] +++ exit code: 0
I0814 08:11:08.452] Recording: run_create_secret_tests
I0814 08:11:08.453] Running command: run_create_secret_tests
I0814 08:11:08.470] 
I0814 08:11:08.472] +++ Running case: test-cmd.run_create_secret_tests 
I0814 08:11:08.474] +++ working dir: /go/src/k8s.io/kubernetes
I0814 08:11:08.476] +++ command: run_create_secret_tests
I0814 08:11:08.560] Successful
I0814 08:11:08.560] message:Error from server (NotFound): secrets "mysecret" not found
I0814 08:11:08.560] has:secrets "mysecret" not found
I0814 08:11:08.705] Successful
I0814 08:11:08.706] message:Error from server (NotFound): secrets "mysecret" not found
I0814 08:11:08.706] has:secrets "mysecret" not found
I0814 08:11:08.707] Successful
I0814 08:11:08.708] message:user-specified
I0814 08:11:08.708] has:user-specified
I0814 08:11:08.776] Successful
I0814 08:11:08.851] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"54d3d32d-bf26-4f1b-ba7b-d3caf7702623","resourceVersion":"758","creationTimestamp":"2019-08-14T08:11:08Z"}}
... skipping 2 lines ...
I0814 08:11:09.003] has:uid
I0814 08:11:09.073] Successful
I0814 08:11:09.074] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"54d3d32d-bf26-4f1b-ba7b-d3caf7702623","resourceVersion":"760","creationTimestamp":"2019-08-14T08:11:08Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-14T08:11:08Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0814 08:11:09.074] has:config1
I0814 08:11:09.140] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"54d3d32d-bf26-4f1b-ba7b-d3caf7702623"}}
I0814 08:11:09.221] Successful
I0814 08:11:09.221] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0814 08:11:09.222] has:configmaps "tester-update-cm" not found
I0814 08:11:09.234] +++ exit code: 0
I0814 08:11:09.269] Recording: run_kubectl_create_kustomization_directory_tests
I0814 08:11:09.269] Running command: run_kubectl_create_kustomization_directory_tests
I0814 08:11:09.286] 
I0814 08:11:09.289] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
W0814 08:11:11.834] I0814 08:11:09.736294   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770267-31296", Name:"test-the-deployment-55cf944b", UID:"ed0c1d3a-2b5f-4501-986c-7b21771deb4d", APIVersion:"apps/v1", ResourceVersion:"768", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-55cf944b-kg5rm
W0814 08:11:11.834] I0814 08:11:09.736757   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770267-31296", Name:"test-the-deployment-55cf944b", UID:"ed0c1d3a-2b5f-4501-986c-7b21771deb4d", APIVersion:"apps/v1", ResourceVersion:"768", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-55cf944b-jzs92
I0814 08:11:12.813] Successful
I0814 08:11:12.814] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 08:11:12.814] valid-pod   0/1     Pending   0          0s
I0814 08:11:12.814] STATUS      REASON          MESSAGE
I0814 08:11:12.815] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 08:11:12.815] has:Timeout exceeded while reading body
I0814 08:11:12.898] Successful
I0814 08:11:12.899] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 08:11:12.899] valid-pod   0/1     Pending   0          1s
I0814 08:11:12.899] has:valid-pod
I0814 08:11:12.972] Successful
I0814 08:11:12.972] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0814 08:11:12.972] has:Invalid timeout value
I0814 08:11:13.050] pod "valid-pod" deleted
I0814 08:11:13.068] +++ exit code: 0
I0814 08:11:13.100] Recording: run_crd_tests
I0814 08:11:13.101] Running command: run_crd_tests
I0814 08:11:13.129] 
... skipping 229 lines ...
I0814 08:11:17.371] foo.company.com/test patched
I0814 08:11:17.460] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0814 08:11:17.539] (Bfoo.company.com/test patched
I0814 08:11:17.623] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0814 08:11:17.701] (Bfoo.company.com/test patched
I0814 08:11:17.787] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0814 08:11:17.933] (B+++ [0814 08:11:17] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0814 08:11:17.997] {
I0814 08:11:17.997]     "apiVersion": "company.com/v1",
I0814 08:11:17.997]     "kind": "Foo",
I0814 08:11:17.997]     "metadata": {
I0814 08:11:17.997]         "annotations": {
I0814 08:11:17.998]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 369 lines ...
I0814 08:11:26.449] bar.company.com/test created
I0814 08:11:26.557] crd.sh:455: Successful get bars {{len .items}}: 1
I0814 08:11:26.641] (Bnamespace "non-native-resources" deleted
I0814 08:11:31.865] crd.sh:458: Successful get bars {{len .items}}: 0
I0814 08:11:32.037] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0814 08:11:32.136] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
W0814 08:11:32.236] Error from server (NotFound): namespaces "non-native-resources" not found
I0814 08:11:32.337] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 08:11:32.356] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0814 08:11:32.389] +++ exit code: 0
I0814 08:11:32.426] Recording: run_cmd_with_img_tests
I0814 08:11:32.426] Running command: run_cmd_with_img_tests
I0814 08:11:32.447] 
... skipping 6 lines ...
I0814 08:11:32.614] +++ [0814 08:11:32] Testing cmd with image
I0814 08:11:32.701] Successful
I0814 08:11:32.702] message:deployment.apps/test1 created
I0814 08:11:32.702] has:deployment.apps/test1 created
I0814 08:11:32.777] deployment.apps "test1" deleted
I0814 08:11:32.855] Successful
I0814 08:11:32.855] message:error: Invalid image name "InvalidImageName": invalid reference format
I0814 08:11:32.856] has:error: Invalid image name "InvalidImageName": invalid reference format
I0814 08:11:32.869] +++ exit code: 0
I0814 08:11:32.904] +++ [0814 08:11:32] Testing recursive resources
I0814 08:11:32.909] +++ [0814 08:11:32] Creating namespace namespace-1565770292-8284
I0814 08:11:32.981] namespace/namespace-1565770292-8284 created
I0814 08:11:33.050] Context "test" modified.
I0814 08:11:33.144] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:33.444] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:33.446] (BSuccessful
I0814 08:11:33.446] message:pod/busybox0 created
I0814 08:11:33.446] pod/busybox1 created
I0814 08:11:33.446] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 08:11:33.446] has:error validating data: kind not set
I0814 08:11:33.533] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:33.698] (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0814 08:11:33.701] (BSuccessful
I0814 08:11:33.701] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:33.702] has:Object 'Kind' is missing
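[editor's note] The recursive-pod fixtures deliberately misspell "kind" as "ind", so the decoder sees no kind at all and every pass over the broken file reports that 'Kind' is missing, which is exactly what these checks assert. A small sketch of that failure using the client-go scheme codecs; the JSON here is a trimmed stand-in for the broken fixture.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	// "ind" instead of "kind": the deserializer cannot tell what type this is.
	broken := []byte(`{"apiVersion":"v1","ind":"Pod","metadata":{"name":"busybox2"}}`)

	_, _, err := scheme.Codecs.UniversalDeserializer().Decode(broken, nil, nil)
	fmt.Println(err) // reports a missing 'Kind', as in the recursive-resource tests above
}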
I0814 08:11:33.791] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:34.082] (Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 08:11:34.084] (BSuccessful
I0814 08:11:34.085] message:pod/busybox0 replaced
I0814 08:11:34.085] pod/busybox1 replaced
I0814 08:11:34.085] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 08:11:34.085] has:error validating data: kind not set
I0814 08:11:34.177] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:34.271] (BSuccessful
I0814 08:11:34.272] message:Name:         busybox0
I0814 08:11:34.272] Namespace:    namespace-1565770292-8284
I0814 08:11:34.272] Priority:     0
I0814 08:11:34.272] Node:         <none>
... skipping 159 lines ...
I0814 08:11:34.285] has:Object 'Kind' is missing
I0814 08:11:34.370] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:34.542] (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0814 08:11:34.544] (BSuccessful
I0814 08:11:34.545] message:pod/busybox0 annotated
I0814 08:11:34.545] pod/busybox1 annotated
I0814 08:11:34.545] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:34.545] has:Object 'Kind' is missing
I0814 08:11:34.640] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:34.899] (Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 08:11:34.902] (BSuccessful
I0814 08:11:34.902] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 08:11:34.902] pod/busybox0 configured
I0814 08:11:34.902] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 08:11:34.902] pod/busybox1 configured
I0814 08:11:34.903] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 08:11:34.903] has:error validating data: kind not set
I0814 08:11:34.979] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:35.117] (Bdeployment.apps/nginx created
I0814 08:11:35.212] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 08:11:35.296] (Bgeneric-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 08:11:35.454] (Bgeneric-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0814 08:11:35.456] (BSuccessful
... skipping 42 lines ...
I0814 08:11:35.525] deployment.apps "nginx" deleted
I0814 08:11:35.614] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:35.764] (Bgeneric-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:35.765] (BSuccessful
I0814 08:11:35.766] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0814 08:11:35.766] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 08:11:35.766] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:35.767] has:Object 'Kind' is missing
I0814 08:11:35.845] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:35.919] (BSuccessful
I0814 08:11:35.919] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:35.920] has:busybox0:busybox1:
I0814 08:11:35.921] Successful
I0814 08:11:35.921] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:35.921] has:Object 'Kind' is missing
I0814 08:11:36.001] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:36.083] (Bpod/busybox0 labeled
I0814 08:11:36.084] pod/busybox1 labeled
I0814 08:11:36.085] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:36.161] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0814 08:11:36.163] (BSuccessful
I0814 08:11:36.163] message:pod/busybox0 labeled
I0814 08:11:36.163] pod/busybox1 labeled
I0814 08:11:36.163] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:36.163] has:Object 'Kind' is missing
I0814 08:11:36.245] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:36.325] (Bpod/busybox0 patched
I0814 08:11:36.325] pod/busybox1 patched
I0814 08:11:36.326] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:36.412] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0814 08:11:36.414] (BSuccessful
I0814 08:11:36.414] message:pod/busybox0 patched
I0814 08:11:36.415] pod/busybox1 patched
I0814 08:11:36.415] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:36.415] has:Object 'Kind' is missing
I0814 08:11:36.495] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:36.654] (Bgeneric-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:36.656] (BSuccessful
I0814 08:11:36.657] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 08:11:36.657] pod "busybox0" force deleted
I0814 08:11:36.657] pod "busybox1" force deleted
I0814 08:11:36.658] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 08:11:36.658] has:Object 'Kind' is missing
I0814 08:11:36.741] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:36.890] (Breplicationcontroller/busybox0 created
I0814 08:11:36.898] replicationcontroller/busybox1 created
I0814 08:11:36.998] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:37.088] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:37.172] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 08:11:37.250] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 08:11:37.403] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 08:11:37.482] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 08:11:37.485] Successful
I0814 08:11:37.485] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0814 08:11:37.485] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0814 08:11:37.486] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:37.486] has:Object 'Kind' is missing
I0814 08:11:37.559] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0814 08:11:37.630] horizontalpodautoscaler.autoscaling "busybox1" deleted
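The HPA assertions above (minReplicas 1, maxReplicas 2, target CPU 80%) correspond to autoscaling the same kind of recursive directory of replication controllers. A hedged sketch, assuming a local ./recursive-demo/rc directory containing busybox0.yaml, busybox1.yaml and the broken manifest:
# Autoscale every RC found under the directory; busybox0/busybox1 get HPAs
# with the bounds checked above, while the broken file fails to decode as before.
kubectl autoscale -f ./recursive-demo/rc --recursive --min=1 --max=2 --cpu-percent=80
kubectl get hpa busybox0 -o jsonpath='{.spec.minReplicas} {.spec.maxReplicas} {.spec.targetCPUUtilizationPercentage}'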
I0814 08:11:37.715] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:37.795] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 08:11:37.872] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 08:11:38.038] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 08:11:38.125] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 08:11:38.126] Successful
I0814 08:11:38.126] message:service/busybox0 exposed
I0814 08:11:38.127] service/busybox1 exposed
I0814 08:11:38.127] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:38.128] has:Object 'Kind' is missing
I0814 08:11:38.209] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:38.290] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 08:11:38.369] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 08:11:38.546] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0814 08:11:38.624] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0814 08:11:38.626] Successful
I0814 08:11:38.626] message:replicationcontroller/busybox0 scaled
I0814 08:11:38.626] replicationcontroller/busybox1 scaled
I0814 08:11:38.627] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:38.627] has:Object 'Kind' is missing
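The replica counts jump from 1 to 2 above because the directory is scaled recursively as well. Under the same assumed layout:
# Scale every replication controller under the directory to two replicas;
# the broken manifest again yields the missing-'Kind' error.
kubectl scale --replicas=2 -f ./recursive-demo/rc --recursive
kubectl get rc busybox0 -o jsonpath='{.spec.replicas}'   # expect: 2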
I0814 08:11:38.707] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:38.871] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:38.873] Successful
I0814 08:11:38.873] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 08:11:38.873] replicationcontroller "busybox0" force deleted
I0814 08:11:38.873] replicationcontroller "busybox1" force deleted
I0814 08:11:38.874] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:38.874] has:Object 'Kind' is missing
I0814 08:11:38.952] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:39.097] deployment.apps/nginx1-deployment created
I0814 08:11:39.105] deployment.apps/nginx0-deployment created
W0814 08:11:39.206] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 08:11:39.206] I0814 08:11:32.693265   53274 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565770292-13725", Name:"test1", UID:"1df2d72c-6d99-4823-9dc9-dd4770da764a", APIVersion:"apps/v1", ResourceVersion:"903", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-9797f89d8 to 1
W0814 08:11:39.207] I0814 08:11:32.699515   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770292-13725", Name:"test1-9797f89d8", UID:"dfb8757b-ee90-4c0b-8f3f-47801b4dc460", APIVersion:"apps/v1", ResourceVersion:"904", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-62cjv
W0814 08:11:39.207] W0814 08:11:33.048248   49808 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 08:11:39.207] E0814 08:11:33.049803   53274 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.208] W0814 08:11:33.149068   49808 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 08:11:39.208] E0814 08:11:33.150514   53274 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.208] W0814 08:11:33.249929   49808 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 08:11:39.209] E0814 08:11:33.251732   53274 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.209] W0814 08:11:33.369025   49808 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 08:11:39.209] E0814 08:11:33.374149   53274 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.209] E0814 08:11:34.051449   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.210] E0814 08:11:34.151876   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.210] E0814 08:11:34.252916   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.210] E0814 08:11:34.375843   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.211] E0814 08:11:35.053154   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.211] I0814 08:11:35.121950   53274 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565770292-8284", Name:"nginx", UID:"6a1f410e-e102-4674-a746-5ee918c8f992", APIVersion:"apps/v1", ResourceVersion:"929", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0814 08:11:39.211] I0814 08:11:35.125483   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770292-8284", Name:"nginx-bbbbb95b5", UID:"4ac433a7-82d5-4106-8750-955967913447", APIVersion:"apps/v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-mkjdh
W0814 08:11:39.212] I0814 08:11:35.128767   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770292-8284", Name:"nginx-bbbbb95b5", UID:"4ac433a7-82d5-4106-8750-955967913447", APIVersion:"apps/v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-shzp8
W0814 08:11:39.212] I0814 08:11:35.129903   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770292-8284", Name:"nginx-bbbbb95b5", UID:"4ac433a7-82d5-4106-8750-955967913447", APIVersion:"apps/v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-fczlj
W0814 08:11:39.212] E0814 08:11:35.153178   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.212] E0814 08:11:35.254350   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.213] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 08:11:39.213] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0814 08:11:39.213] E0814 08:11:35.377302   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.213] E0814 08:11:36.054634   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.213] E0814 08:11:36.154140   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.214] E0814 08:11:36.255872   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.214] E0814 08:11:36.378500   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.214] I0814 08:11:36.894085   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565770292-8284", Name:"busybox0", UID:"0c476aeb-e704-45f0-b8af-6466fee22da0", APIVersion:"v1", ResourceVersion:"959", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-9wskc
W0814 08:11:39.214] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 08:11:39.215] I0814 08:11:36.902495   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565770292-8284", Name:"busybox1", UID:"4e8a9450-f2cc-4c4c-b04a-1fca9b18f422", APIVersion:"v1", ResourceVersion:"961", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-pzrjj
W0814 08:11:39.215] E0814 08:11:37.056000   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.215] E0814 08:11:37.155322   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.215] E0814 08:11:37.257266   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.216] E0814 08:11:37.379682   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.216] E0814 08:11:38.057291   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.216] E0814 08:11:38.156588   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.216] E0814 08:11:38.258670   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.216] E0814 08:11:38.382840   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.217] I0814 08:11:38.453010   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565770292-8284", Name:"busybox0", UID:"0c476aeb-e704-45f0-b8af-6466fee22da0", APIVersion:"v1", ResourceVersion:"981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-9w7f4
W0814 08:11:39.217] I0814 08:11:38.464179   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565770292-8284", Name:"busybox1", UID:"4e8a9450-f2cc-4c4c-b04a-1fca9b18f422", APIVersion:"v1", ResourceVersion:"985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-fwtft
W0814 08:11:39.217] E0814 08:11:39.058833   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.218] I0814 08:11:39.103335   53274 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565770292-8284", Name:"nginx1-deployment", UID:"4898f9aa-6706-4aca-aefa-4d7796e9898e", APIVersion:"apps/v1", ResourceVersion:"1002", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0814 08:11:39.218] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 08:11:39.218] I0814 08:11:39.108795   53274 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565770292-8284", Name:"nginx0-deployment", UID:"7db9b112-c505-4dbb-81d6-7ccb2e903351", APIVersion:"apps/v1", ResourceVersion:"1003", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0814 08:11:39.219] I0814 08:11:39.110915   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770292-8284", Name:"nginx1-deployment-84f7f49fb7", UID:"8461f0e2-ac26-4d1c-a731-080a0899b568", APIVersion:"apps/v1", ResourceVersion:"1004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-v9nff
W0814 08:11:39.219] I0814 08:11:39.111875   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770292-8284", Name:"nginx0-deployment-57475bf54d", UID:"94f07a5d-3d3e-45b8-8d4a-f071c2974bf1", APIVersion:"apps/v1", ResourceVersion:"1006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-v6c5p
W0814 08:11:39.219] I0814 08:11:39.114038   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770292-8284", Name:"nginx1-deployment-84f7f49fb7", UID:"8461f0e2-ac26-4d1c-a731-080a0899b568", APIVersion:"apps/v1", ResourceVersion:"1004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-j2qrv
W0814 08:11:39.220] I0814 08:11:39.116464   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565770292-8284", Name:"nginx0-deployment-57475bf54d", UID:"94f07a5d-3d3e-45b8-8d4a-f071c2974bf1", APIVersion:"apps/v1", ResourceVersion:"1006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-pjzjt
W0814 08:11:39.220] E0814 08:11:39.157757   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:39.261] E0814 08:11:39.260294   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 08:11:39.361] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0814 08:11:39.362] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 08:11:39.485] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 08:11:39.487] Successful
I0814 08:11:39.487] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0814 08:11:39.488] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0814 08:11:39.488] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 08:11:39.488] has:Object 'Kind' is missing
I0814 08:11:39.573] deployment.apps/nginx1-deployment paused
I0814 08:11:39.581] deployment.apps/nginx0-deployment paused
I0814 08:11:39.675] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0814 08:11:39.676] Successful
I0814 08:11:39.677] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0814 08:11:39.961] 1         <none>
I0814 08:11:39.961] 
I0814 08:11:39.961] deployment.apps/nginx0-deployment 
I0814 08:11:39.962] REVISION  CHANGE-CAUSE
I0814 08:11:39.962] 1         <none>
I0814 08:11:39.962] 
I0814 08:11:39.963] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 08:11:39.963] has:nginx0-deployment
I0814 08:11:39.963] Successful
I0814 08:11:39.964] message:deployment.apps/nginx1-deployment 
I0814 08:11:39.964] REVISION  CHANGE-CAUSE
I0814 08:11:39.964] 1         <none>
I0814 08:11:39.965] 
I0814 08:11:39.965] deployment.apps/nginx0-deployment 
I0814 08:11:39.965] REVISION  CHANGE-CAUSE
I0814 08:11:39.965] 1         <none>
I0814 08:11:39.965] 
I0814 08:11:39.966] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 08:11:39.966] has:nginx1-deployment
I0814 08:11:39.967] Successful
I0814 08:11:39.967] message:deployment.apps/nginx1-deployment 
I0814 08:11:39.967] REVISION  CHANGE-CAUSE
I0814 08:11:39.967] 1         <none>
I0814 08:11:39.967] 
I0814 08:11:39.967] deployment.apps/nginx0-deployment 
I0814 08:11:39.967] REVISION  CHANGE-CAUSE
I0814 08:11:39.967] 1         <none>
I0814 08:11:39.967] 
I0814 08:11:39.968] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 08:11:39.968] has:Object 'Kind' is missing
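The "skipped rollback", pause and REVISION/CHANGE-CAUSE output above comes from running the rollout subcommands recursively over a deployment directory that also contains a broken manifest: each valid deployment is handled, and the broken file produces the same decode error. A sketch under the same assumptions:
# Rollout verbs applied to every deployment manifest found under the directory.
kubectl rollout undo -f ./recursive-demo/deployment --recursive      # skipped rollback at revision 1
kubectl rollout pause -f ./recursive-demo/deployment --recursive
kubectl rollout history -f ./recursive-demo/deployment --recursive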
I0814 08:11:40.037] deployment.apps "nginx1-deployment" force deleted
I0814 08:11:40.041] deployment.apps "nginx0-deployment" force deleted
W0814 08:11:40.142] E0814 08:11:39.384291   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:40.142] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 08:11:40.143] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0814 08:11:40.143] E0814 08:11:40.059803   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:40.159] E0814 08:11:40.159287   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:40.262] E0814 08:11:40.261674   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:40.386] E0814 08:11:40.385686   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:41.061] E0814 08:11:41.061092   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:41.161] E0814 08:11:41.160662   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 08:11:41.261] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:41.275] replicationcontroller/busybox0 created
I0814 08:11:41.279] replicationcontroller/busybox1 created
W0814 08:11:41.380] E0814 08:11:41.262929   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:41.381] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 08:11:41.381] I0814 08:11:41.280022   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565770292-8284", Name:"busybox0", UID:"c0cd4758-a21d-40d8-9a5a-02157bbfa7c1", APIVersion:"v1", ResourceVersion:"1051", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qz9fj
W0814 08:11:41.382] I0814 08:11:41.300665   53274 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565770292-8284", Name:"busybox1", UID:"a8623060-9c63-4503-b23e-b38d24eb1f06", APIVersion:"v1", ResourceVersion:"1052", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-tbqt4
W0814 08:11:41.387] E0814 08:11:41.386830   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 08:11:41.487] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 08:11:41.488] Successful
I0814 08:11:41.488] message:no rollbacker has been implemented for "ReplicationController"
I0814 08:11:41.488] no rollbacker has been implemented for "ReplicationController"
I0814 08:11:41.488] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:41.489] has:no rollbacker has been implemented for "ReplicationController"
I0814 08:11:41.489] Successful
I0814 08:11:41.489] message:no rollbacker has been implemented for "ReplicationController"
I0814 08:11:41.489] no rollbacker has been implemented for "ReplicationController"
I0814 08:11:41.489] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:41.489] has:Object 'Kind' is missing
I0814 08:11:41.561] Successful
I0814 08:11:41.562] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:41.562] error: replicationcontrollers "busybox0" pausing is not supported
I0814 08:11:41.562] error: replicationcontrollers "busybox1" pausing is not supported
I0814 08:11:41.563] has:Object 'Kind' is missing
I0814 08:11:41.563] Successful
I0814 08:11:41.564] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:41.564] error: replicationcontrollers "busybox0" pausing is not supported
I0814 08:11:41.564] error: replicationcontrollers "busybox1" pausing is not supported
I0814 08:11:41.564] has:replicationcontrollers "busybox0" pausing is not supported
I0814 08:11:41.565] Successful
I0814 08:11:41.565] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:41.565] error: replicationcontrollers "busybox0" pausing is not supported
I0814 08:11:41.566] error: replicationcontrollers "busybox1" pausing is not supported
I0814 08:11:41.566] has:replicationcontrollers "busybox1" pausing is not supported
I0814 08:11:41.650] Successful
I0814 08:11:41.650] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:41.651] error: replicationcontrollers "busybox0" resuming is not supported
I0814 08:11:41.651] error: replicationcontrollers "busybox1" resuming is not supported
I0814 08:11:41.651] has:Object 'Kind' is missing
I0814 08:11:41.652] Successful
I0814 08:11:41.652] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:41.653] error: replicationcontrollers "busybox0" resuming is not supported
I0814 08:11:41.653] error: replicationcontrollers "busybox1" resuming is not supported
I0814 08:11:41.653] has:replicationcontrollers "busybox0" resuming is not supported
I0814 08:11:41.655] Successful
I0814 08:11:41.655] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 08:11:41.655] error: replicationcontrollers "busybox0" resuming is not supported
I0814 08:11:41.656] error: replicationcontrollers "busybox1" resuming is not supported
I0814 08:11:41.656] has:replicationcontrollers "busybox0" resuming is not supported
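Unlike Deployments, ReplicationControllers implement neither rollback nor pause/resume, which is why the recursive rollout calls above report "no rollbacker has been implemented" and "pausing/resuming is not supported" on top of the usual decode error. Sketch:
# Rollout pause/resume only work for resources that support rollouts
# (e.g. Deployments); against RCs they fail as recorded above.
kubectl rollout pause -f ./recursive-demo/rc --recursive    # pausing is not supported
kubectl rollout resume -f ./recursive-demo/rc --recursive   # resuming is not supported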
I0814 08:11:41.730] replicationcontroller "busybox0" force deleted
I0814 08:11:41.737] replicationcontroller "busybox1" force deleted
W0814 08:11:41.838] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 08:11:41.839] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0814 08:11:42.063] E0814 08:11:42.062698   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:42.162] E0814 08:11:42.162181   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:42.265] E0814 08:11:42.264544   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:42.388] E0814 08:11:42.388213   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 08:11:42.744] Recording: run_namespace_tests
I0814 08:11:42.744] Running command: run_namespace_tests
I0814 08:11:42.762] 
I0814 08:11:42.764] +++ Running case: test-cmd.run_namespace_tests 
I0814 08:11:42.766] +++ working dir: /go/src/k8s.io/kubernetes
I0814 08:11:42.768] +++ command: run_namespace_tests
I0814 08:11:42.777] +++ [0814 08:11:42] Testing kubectl(v1:namespaces)
I0814 08:11:42.847] namespace/my-namespace created
I0814 08:11:42.932] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 08:11:43.007] namespace "my-namespace" deleted
W0814 08:11:43.108] E0814 08:11:43.064268   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:43.164] E0814 08:11:43.163902   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:43.266] E0814 08:11:43.266154   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:43.390] E0814 08:11:43.389631   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:44.066] E0814 08:11:44.065680   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:44.166] E0814 08:11:44.165468   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:44.268] E0814 08:11:44.267574   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:44.391] E0814 08:11:44.391184   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:45.067] E0814 08:11:45.067204   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:45.167] E0814 08:11:45.167102   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:45.270] E0814 08:11:45.269356   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:45.393] E0814 08:11:45.392897   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:45.598] I0814 08:11:45.597367   53274 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 08:11:45.698] I0814 08:11:45.697733   53274 controller_utils.go:1036] Caches are synced for resource quota controller
W0814 08:11:46.018] I0814 08:11:46.018157   53274 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 08:11:46.069] E0814 08:11:46.068730   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:46.119] I0814 08:11:46.118563   53274 controller_utils.go:1036] Caches are synced for garbage collector controller
W0814 08:11:46.169] E0814 08:11:46.168568   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:46.271] E0814 08:11:46.270970   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:46.395] E0814 08:11:46.394443   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:47.071] E0814 08:11:47.070392   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:47.170] E0814 08:11:47.170104   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:47.273] E0814 08:11:47.272437   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:47.396] E0814 08:11:47.395944   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:48.072] E0814 08:11:48.071425   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:48.172] E0814 08:11:48.171630   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 08:11:48.272] namespace/my-namespace condition met
I0814 08:11:48.273] Successful
I0814 08:11:48.273] message:Error from server (NotFound): namespaces "my-namespace" not found
I0814 08:11:48.274] has: not found
I0814 08:11:48.274] namespace/my-namespace created
I0814 08:11:48.333] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 08:11:48.529] Successful
I0814 08:11:48.529] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 08:11:48.530] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0814 08:11:48.534] namespace "namespace-1565770270-30781" deleted
I0814 08:11:48.534] namespace "namespace-1565770271-15540" deleted
I0814 08:11:48.534] namespace "namespace-1565770273-10377" deleted
I0814 08:11:48.534] namespace "namespace-1565770274-29407" deleted
I0814 08:11:48.534] namespace "namespace-1565770292-13725" deleted
I0814 08:11:48.534] namespace "namespace-1565770292-8284" deleted
I0814 08:11:48.535] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 08:11:48.535] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 08:11:48.535] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 08:11:48.535] has:warning: deleting cluster-scoped resources
I0814 08:11:48.535] Successful
I0814 08:11:48.535] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 08:11:48.535] namespace "kube-node-lease" deleted
I0814 08:11:48.535] namespace "my-namespace" deleted
I0814 08:11:48.536] namespace "namespace-1565770184-27926" deleted
... skipping 27 lines ...
I0814 08:11:48.540] namespace "namespace-1565770270-30781" deleted
I0814 08:11:48.540] namespace "namespace-1565770271-15540" deleted
I0814 08:11:48.540] namespace "namespace-1565770273-10377" deleted
I0814 08:11:48.540] namespace "namespace-1565770274-29407" deleted
I0814 08:11:48.540] namespace "namespace-1565770292-13725" deleted
I0814 08:11:48.540] namespace "namespace-1565770292-8284" deleted
I0814 08:11:48.541] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 08:11:48.541] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 08:11:48.541] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 08:11:48.541] has:namespace "my-namespace" deleted
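The block above deletes every namespace and expects the built-in ones to survive: default, kube-public and kube-system are refused server-side by the admission layer that treats them as immortal, hence the Forbidden errors. A minimal sketch of the operation being exercised:
# Delete all namespaces: user namespaces are removed, the protected built-ins
# are rejected with "this namespace may not be deleted".
kubectl delete namespaces --all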
I0814 08:11:48.630] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0814 08:11:48.700] namespace/other created
I0814 08:11:48.787] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0814 08:11:48.878] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:49.032] pod/valid-pod created
I0814 08:11:49.136] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 08:11:49.223] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 08:11:49.302] Successful
I0814 08:11:49.302] message:error: a resource cannot be retrieved by name across all namespaces
I0814 08:11:49.303] has:a resource cannot be retrieved by name across all namespaces
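kubectl deliberately rejects combining a specific resource name with --all-namespaces, since a name is only unique within one namespace. Sketch:
# Naming a pod together with --all-namespaces is rejected ...
kubectl get pods valid-pod --all-namespaces
# ... while the same name scoped to a single namespace works.
kubectl get pods valid-pod --namespace=other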
I0814 08:11:49.389] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 08:11:49.464] pod "valid-pod" force deleted
I0814 08:11:49.555] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:49.629] namespace "other" deleted
W0814 08:11:49.730] E0814 08:11:48.274450   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:49.730] E0814 08:11:48.397212   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:49.730] E0814 08:11:49.073828   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:49.731] E0814 08:11:49.172967   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:49.731] E0814 08:11:49.276011   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:49.731] E0814 08:11:49.398778   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:49.731] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 08:11:50.076] E0814 08:11:50.075669   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:50.175] E0814 08:11:50.174452   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:50.278] E0814 08:11:50.277420   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:50.400] E0814 08:11:50.400148   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:51.077] E0814 08:11:51.077237   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:51.176] E0814 08:11:51.175833   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:51.279] E0814 08:11:51.278851   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:51.402] E0814 08:11:51.401806   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:52.079] E0814 08:11:52.079013   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:52.177] E0814 08:11:52.177246   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:52.281] E0814 08:11:52.280353   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:52.320] I0814 08:11:52.320110   53274 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565770292-8284
W0814 08:11:52.324] I0814 08:11:52.323818   53274 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565770292-8284
W0814 08:11:52.404] E0814 08:11:52.403263   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:53.081] E0814 08:11:53.080448   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:53.179] E0814 08:11:53.178796   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:53.282] E0814 08:11:53.281827   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:53.405] E0814 08:11:53.404851   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:54.081] E0814 08:11:54.081266   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:54.180] E0814 08:11:54.180165   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:54.284] E0814 08:11:54.283275   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:54.407] E0814 08:11:54.406676   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 08:11:54.740] +++ exit code: 0
I0814 08:11:54.778] Recording: run_secrets_test
I0814 08:11:54.778] Running command: run_secrets_test
I0814 08:11:54.795] 
I0814 08:11:54.797] +++ Running case: test-cmd.run_secrets_test 
I0814 08:11:54.799] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 58 lines ...
I0814 08:11:56.612] secret "test-secret" deleted
I0814 08:11:56.687] secret/test-secret created
I0814 08:11:56.774] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 08:11:56.855] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 08:11:56.928] secret "test-secret" deleted
W0814 08:11:57.028] I0814 08:11:55.016292   70219 loader.go:375] Config loaded from file:  /tmp/tmp.JYO2Lm4LU5/.kube/config
W0814 08:11:57.029] E0814 08:11:55.082698   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.029] E0814 08:11:55.181483   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.030] E0814 08:11:55.284550   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.030] E0814 08:11:55.407898   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.031] E0814 08:11:56.084156   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.031] E0814 08:11:56.183244   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.031] E0814 08:11:56.285695   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.031] E0814 08:11:56.409256   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.086] E0814 08:11:57.085499   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.185] E0814 08:11:57.184679   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 08:11:57.285] secret/secret-string-data created
I0814 08:11:57.286] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 08:11:57.286] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 08:11:57.314] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0814 08:11:57.383] secret "secret-string-data" deleted
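The core.sh:796-798 assertions show how Secret stringData behaves: values supplied as plain strings end up base64-encoded under .data (djE= is "v1", djI= is "v2"), while .stringData itself is write-only and never returned. A hedged sketch with a hypothetical manifest, not the exact test fixture:
# Create a Secret using stringData only; the server stores the values in
# .data as base64 and drops .stringData on read.
kubectl create --namespace=test-secrets -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-string-data
stringData:
  k1: v1
  k2: v2
EOF
kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'   # expect: map[k1:djE= k2:djI=]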
I0814 08:11:57.467] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 08:11:57.607] (Bsecret "test-secret" deleted
I0814 08:11:57.681] namespace "test-secrets" deleted
W0814 08:11:57.782] E0814 08:11:57.287630   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:57.783] E0814 08:11:57.410522   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:58.087] E0814 08:11:58.086944   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:58.187] E0814 08:11:58.186263   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:58.289] E0814 08:11:58.289000   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:58.412] E0814 08:11:58.411818   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:59.089] E0814 08:11:59.088324   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:59.188] E0814 08:11:59.187670   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:59.291] E0814 08:11:59.290435   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:11:59.413] E0814 08:11:59.413171   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:00.090] E0814 08:12:00.089828   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:00.189] E0814 08:12:00.189007   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:00.293] E0814 08:12:00.292426   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:00.415] E0814 08:12:00.414589   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:01.091] E0814 08:12:01.091169   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:01.191] E0814 08:12:01.190324   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:01.294] E0814 08:12:01.293926   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:01.416] E0814 08:12:01.416015   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:02.093] E0814 08:12:02.092498   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 08:12:02.192] E0814 08:12:02.191774   53274 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/in