Result: FAILURE
Tests: 1 failed / 2469 succeeded
Started: 2019-08-12 13:14
Elapsed: 26m53s
Revision:
Builder: gke-prow-ssd-pool-1a225945-m4r6
links: {resultstore: https://source.cloud.google.com/results/invocations/c5a8de70-5591-4d2c-8173-2f9ec30863a1/targets/test}
pod: 066ee433-bd03-11e9-8c5c-1a8ca1133b15
resultstore: https://source.cloud.google.com/results/invocations/c5a8de70-5591-4d2c-8173-2f9ec30863a1/targets/test
infra-commit: 3d3631683
repo: k8s.io/kubernetes
repo-commit: 0610bf0c7ed73a8e8204cb870e20c724b24c0600
repos: {k8s.io/kubernetes: master}

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeProvision 26s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$
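
To reproduce locally, note that Kubernetes integration tests expect an etcd server on 127.0.0.1:2379 (the storage backend ServerList in the log below). A minimal sketch of a typical local workflow, assuming a kubernetes/kubernetes checkout at the repo-commit listed above; the helper script and make variables shown here are the usual ones for that era and may differ by branch:

# assumed helper: installs etcd under third_party/etcd, then put it on PATH
./hack/install-etcd.sh
export PATH="$(pwd)/third_party/etcd:${PATH}"
# run only the failing test through the integration-test target (assumed WHAT/KUBE_TEST_ARGS variables)
make test-integration WHAT=./test/integration/volumescheduling KUBE_TEST_ARGS='-run TestVolumeProvision$'

The raw test output follows.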
=== RUN   TestVolumeProvision
W0812 13:40:14.733937  112807 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0812 13:40:14.733948  112807 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
I0812 13:40:14.734896  112807 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0812 13:40:14.734942  112807 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0812 13:40:14.734957  112807 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0812 13:40:14.734967  112807 master.go:234] Using reconciler: 
I0812 13:40:14.736746  112807 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.736891  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.736979  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.737157  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.737441  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.738068  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.738304  112807 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0812 13:40:14.738336  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.738449  112807 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0812 13:40:14.738435  112807 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.738951  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.739058  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.739182  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.739329  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.740006  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.740276  112807 store.go:1342] Monitoring events count at <storage-prefix>//events
I0812 13:40:14.740425  112807 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.740606  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.740680  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.740750  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.740507  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.740811  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.740813  112807 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0812 13:40:14.740967  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.741516  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.741644  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.741780  112807 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0812 13:40:14.741838  112807 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0812 13:40:14.741834  112807 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.741980  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.741997  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.742039  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.742105  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.742189  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.742569  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.742858  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.743025  112807 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0812 13:40:14.743071  112807 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0812 13:40:14.743230  112807 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.744037  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.744756  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.744777  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.744828  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.744867  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.745784  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.745846  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.745980  112807 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0812 13:40:14.746047  112807 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0812 13:40:14.746107  112807 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.746160  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.746168  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.746191  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.746943  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.747019  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.747169  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.747646  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.747838  112807 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0812 13:40:14.747878  112807 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0812 13:40:14.747969  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.747982  112807 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.748204  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.748307  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.748516  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.748636  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.748771  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.749104  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.749219  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.749430  112807 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0812 13:40:14.749493  112807 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0812 13:40:14.749805  112807 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.750081  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.750172  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.750272  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.750375  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.750465  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.751214  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.751398  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.751866  112807 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0812 13:40:14.751988  112807 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0812 13:40:14.752858  112807 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.753246  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.753413  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.753635  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.753368  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.753884  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.755031  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.755077  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.755196  112807 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0812 13:40:14.755350  112807 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.755414  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.755422  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.755448  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.755481  112807 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0812 13:40:14.755654  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.756060  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.756189  112807 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0812 13:40:14.756296  112807 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.756365  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.756374  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.756417  112807 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0812 13:40:14.756426  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.756801  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.756909  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.756988  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.757515  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.758406  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.758746  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.758915  112807 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0812 13:40:14.758957  112807 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0812 13:40:14.759052  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.759108  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.759121  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.759146  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.759257  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.760157  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.761085  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.761152  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.761231  112807 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0812 13:40:14.761307  112807 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0812 13:40:14.761380  112807 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.761445  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.761461  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.761484  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.762053  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.762312  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.762639  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.762871  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.763015  112807 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0812 13:40:14.763072  112807 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0812 13:40:14.763556  112807 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.763640  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.763648  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.763671  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.763751  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.764214  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.764231  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.764297  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.764446  112807 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0812 13:40:14.764554  112807 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0812 13:40:14.764510  112807 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.764837  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.764851  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.764958  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.764997  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.765212  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.765275  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.765290  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.765377  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.766331  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.766514  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.766563  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.766848  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.767080  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.767132  112807 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.767256  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.767264  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.767284  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.767320  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.767525  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.767777  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.767822  112807 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0812 13:40:14.767924  112807 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0812 13:40:14.768311  112807 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.768452  112807 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.769070  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.769811  112807 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.770514  112807 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.770992  112807 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.771470  112807 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.771776  112807 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.771863  112807 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.772068  112807 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.772420  112807 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.772812  112807 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.772967  112807 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.773485  112807 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.773765  112807 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.774109  112807 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.774263  112807 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.774659  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.774821  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.774908  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.774979  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.775194  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.775298  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.775410  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.775944  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.776138  112807 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.776657  112807 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.777202  112807 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.777378  112807 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.777532  112807 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.778002  112807 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.778280  112807 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.778800  112807 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.779385  112807 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.779834  112807 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.780439  112807 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.780739  112807 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.780842  112807 master.go:418] Skipping disabled API group "auditregistration.k8s.io".
I0812 13:40:14.780858  112807 master.go:426] Enabling API group "authentication.k8s.io".
I0812 13:40:14.780872  112807 master.go:426] Enabling API group "authorization.k8s.io".
I0812 13:40:14.780985  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.781193  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.781214  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.781254  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.781337  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.782155  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.782278  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.782442  112807 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0812 13:40:14.782658  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.782739  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.782753  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.782774  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.782517  112807 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0812 13:40:14.783000  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.784220  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.785265  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.785306  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.785478  112807 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0812 13:40:14.785584  112807 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0812 13:40:14.785841  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.786180  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.786189  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.786261  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.786304  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.786770  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.787794  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.788023  112807 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0812 13:40:14.788048  112807 master.go:426] Enabling API group "autoscaling".
I0812 13:40:14.788077  112807 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0812 13:40:14.788180  112807 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.788237  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.788247  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.788271  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.788395  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.788489  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.789169  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.789301  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.789948  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.790422  112807 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0812 13:40:14.790523  112807 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0812 13:40:14.791045  112807 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.791160  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.791172  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.791215  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.791278  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.791430  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.792233  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.792432  112807 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0812 13:40:14.792467  112807 master.go:426] Enabling API group "batch".
I0812 13:40:14.792626  112807 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.792743  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.792764  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.792799  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.792836  112807 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0812 13:40:14.792904  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.793749  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.793774  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.793792  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.793895  112807 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0812 13:40:14.793914  112807 master.go:426] Enabling API group "certificates.k8s.io".
I0812 13:40:14.793963  112807 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0812 13:40:14.794033  112807 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.794092  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.794101  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.794125  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.794161  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.794742  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.794899  112807 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0812 13:40:14.794927  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.794927  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.794995  112807 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0812 13:40:14.795013  112807 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.795067  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.795074  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.795104  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.795189  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.795500  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.795587  112807 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0812 13:40:14.795600  112807 master.go:426] Enabling API group "coordination.k8s.io".
I0812 13:40:14.795765  112807 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.795811  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.795819  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.795858  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.795885  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.795895  112807 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0812 13:40:14.796022  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.796064  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.796369  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.796525  112807 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0812 13:40:14.796550  112807 master.go:426] Enabling API group "extensions".
I0812 13:40:14.796723  112807 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.796805  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.796817  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.796848  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.796867  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.796895  112807 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0812 13:40:14.796932  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.797073  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.797651  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.797678  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.797877  112807 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0812 13:40:14.798004  112807 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.798055  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.798065  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.798067  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.798089  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.798101  112807 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0812 13:40:14.798139  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.798579  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.798778  112807 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0812 13:40:14.798803  112807 master.go:426] Enabling API group "networking.k8s.io".
I0812 13:40:14.798842  112807 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.798907  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.799115  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.799148  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.799233  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.799266  112807 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0812 13:40:14.799462  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.799707  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.799796  112807 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0812 13:40:14.799808  112807 master.go:426] Enabling API group "node.k8s.io".
I0812 13:40:14.799879  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.799945  112807 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.799988  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.799996  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.800004  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.800028  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.800044  112807 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0812 13:40:14.800125  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.800987  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.801543  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.801859  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.801895  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.801992  112807 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0812 13:40:14.802110  112807 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.802161  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.802167  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.802190  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.802239  112807 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0812 13:40:14.802440  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.802930  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.803070  112807 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0812 13:40:14.803091  112807 master.go:426] Enabling API group "policy".
I0812 13:40:14.803140  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.803132  112807 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.803177  112807 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0812 13:40:14.803199  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.803208  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.803237  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.803386  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.803784  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.803880  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.803920  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.803973  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.804001  112807 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0812 13:40:14.804110  112807 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0812 13:40:14.804119  112807 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.804183  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.804197  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.804218  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.804265  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.804548  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.804560  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.804730  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.804742  112807 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0812 13:40:14.804762  112807 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0812 13:40:14.804781  112807 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.804844  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.804860  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.804880  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.804928  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.805275  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.805615  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.806241  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.806370  112807 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0812 13:40:14.806464  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.806572  112807 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0812 13:40:14.806578  112807 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.806656  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.806669  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.806725  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.806835  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.807040  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.807165  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.807213  112807 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0812 13:40:14.807325  112807 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.807241  112807 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0812 13:40:14.807385  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.807664  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.807751  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.807564  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.807839  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.808079  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.808174  112807 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0812 13:40:14.808283  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.808289  112807 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.808336  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.808343  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.808362  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.808400  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.808421  112807 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0812 13:40:14.808542  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.808794  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.808831  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.808861  112807 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0812 13:40:14.808882  112807 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.808920  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.808926  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.808946  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.808954  112807 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0812 13:40:14.808984  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.809272  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.809417  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.809464  112807 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0812 13:40:14.809841  112807 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.809896  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.809973  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.809984  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.809870  112807 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0812 13:40:14.810035  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.810109  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.810267  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.810622  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.810949  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.811064  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.811128  112807 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0812 13:40:14.811166  112807 master.go:426] Enabling API group "rbac.authorization.k8s.io".
I0812 13:40:14.811201  112807 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0812 13:40:14.813595  112807 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.813873  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.813906  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.813939  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.814062  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.814387  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.814424  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.814595  112807 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0812 13:40:14.814875  112807 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.814945  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.814956  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.815052  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.814639  112807 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0812 13:40:14.815316  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.815896  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.815951  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.816083  112807 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0812 13:40:14.816144  112807 master.go:426] Enabling API group "scheduling.k8s.io".
I0812 13:40:14.816144  112807 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0812 13:40:14.816084  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.816254  112807 master.go:418] Skipping disabled API group "settings.k8s.io".
I0812 13:40:14.815943  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.816416  112807 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.816523  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.816536  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.816569  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.816610  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.817255  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.817391  112807 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0812 13:40:14.817420  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.817522  112807 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.817538  112807 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0812 13:40:14.817586  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.817603  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.817643  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.817788  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.818074  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.818182  112807 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0812 13:40:14.818209  112807 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.818247  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.818263  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.818272  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.818324  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.818360  112807 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0812 13:40:14.818521  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.819307  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.819872  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.819907  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.820071  112807 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0812 13:40:14.820145  112807 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.820186  112807 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0812 13:40:14.820237  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.820350  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.820384  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.820475  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.820729  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.821245  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.821312  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.821367  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.827434  112807 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0812 13:40:14.827787  112807 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.827938  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.828145  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.827573  112807 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0812 13:40:14.828258  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.828386  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.828721  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.828924  112807 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0812 13:40:14.829024  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.829069  112807 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.829133  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.829140  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.829165  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.829452  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.829603  112807 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0812 13:40:14.829938  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.830173  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.830241  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.830349  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.830462  112807 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0812 13:40:14.830484  112807 master.go:426] Enabling API group "storage.k8s.io".
I0812 13:40:14.830516  112807 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0812 13:40:14.830624  112807 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.830682  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.830719  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.830744  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.830883  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.831187  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.831406  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.831863  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.831497  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.832828  112807 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0812 13:40:14.832940  112807 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0812 13:40:14.833640  112807 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.834044  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.834225  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.834323  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.833916  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.834533  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.835202  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.835331  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.835577  112807 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0812 13:40:14.835680  112807 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0812 13:40:14.835851  112807 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.835942  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.835955  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.835996  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.836064  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.836442  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.836742  112807 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0812 13:40:14.836959  112807 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.837134  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.837183  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.837228  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.837298  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.837359  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.837520  112807 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0812 13:40:14.837735  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.838080  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.838182  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.838329  112807 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0812 13:40:14.838384  112807 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0812 13:40:14.838530  112807 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.838614  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.838622  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.838706  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.838750  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.838808  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.839160  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.839327  112807 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0812 13:40:14.839363  112807 master.go:426] Enabling API group "apps".
I0812 13:40:14.839407  112807 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.839496  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.839544  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.839590  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.839651  112807 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0812 13:40:14.839911  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.839659  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.840052  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.840362  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.840401  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.840549  112807 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0812 13:40:14.840592  112807 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0812 13:40:14.840603  112807 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.840754  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.840781  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.840821  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.840925  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.841575  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.841838  112807 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0812 13:40:14.841843  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.841909  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.841892  112807 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.842018  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.842032  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.842060  112807 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0812 13:40:14.842035  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.842111  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.842253  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.842755  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.842818  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.842927  112807 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0812 13:40:14.842967  112807 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0812 13:40:14.842966  112807 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.843049  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.843061  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.843095  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.843144  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.843205  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.843445  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.843533  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.843545  112807 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0812 13:40:14.844066  112807 master.go:426] Enabling API group "admissionregistration.k8s.io".
I0812 13:40:14.844133  112807 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.843560  112807 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0812 13:40:14.843706  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.844445  112807 client.go:354] parsed scheme: ""
I0812 13:40:14.844835  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:14.844988  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:14.845207  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.845429  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.845853  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:14.846003  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:14.846349  112807 store.go:1342] Monitoring events count at <storage-prefix>//events
I0812 13:40:14.846388  112807 master.go:426] Enabling API group "events.k8s.io".
I0812 13:40:14.846608  112807 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0812 13:40:14.846669  112807 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.847305  112807 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.847765  112807 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.847882  112807 watch_cache.go:405] Replace watchCache (rev: 56522) 
I0812 13:40:14.848140  112807 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.848525  112807 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.848804  112807 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.849196  112807 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.849509  112807 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.849728  112807 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.850024  112807 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.850945  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.851286  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.852060  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.852371  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.853073  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.853481  112807 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.854468  112807 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.854904  112807 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.855573  112807 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.855971  112807 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 13:40:14.856161  112807 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0812 13:40:14.856802  112807 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.857025  112807 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.857596  112807 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.858424  112807 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.859092  112807 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.859850  112807 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.860313  112807 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.861208  112807 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.861883  112807 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.862189  112807 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.862925  112807 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 13:40:14.863092  112807 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0812 13:40:14.863837  112807 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.864190  112807 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.864730  112807 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.865346  112807 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.865858  112807 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.866424  112807 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.867054  112807 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.867608  112807 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.868095  112807 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.868827  112807 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.869506  112807 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 13:40:14.869681  112807 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0812 13:40:14.870280  112807 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.870901  112807 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 13:40:14.871097  112807 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0812 13:40:14.871658  112807 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.872279  112807 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.872579  112807 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.873113  112807 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.873622  112807 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.874134  112807 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.874727  112807 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 13:40:14.874904  112807 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0812 13:40:14.875624  112807 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.876251  112807 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.876572  112807 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.877235  112807 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.877522  112807 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.877911  112807 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.878566  112807 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.878871  112807 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.879192  112807 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.879876  112807 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.880352  112807 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.880651  112807 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0812 13:40:14.880872  112807 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0812 13:40:14.880974  112807 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0812 13:40:14.881585  112807 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.882264  112807 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.882913  112807 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.883477  112807 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0812 13:40:14.884187  112807 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"11762e59-1cdd-457b-89ef-c5f604a5ded0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
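(Editor's note: every "storing ... in ..., reading as ..." line above dumps the same storagebackend.Config; only the resource and group/version change. As a readability aid, that repeated struct corresponds roughly to the Go literal below. This is a sketch assuming k8s.io/apiserver's storagebackend package as vendored at this commit; the prefix and etcd address are copied from the log, and the two durations are the logged nanosecond values 300000000000 and 60000000000 written as 5m and 1m.)

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Reconstructed from the repeated log dump above; Codec, EncodeVersioner
	// and Transformer are nil in the log, so they are left at their zero values.
	cfg := storagebackend.Config{
		Type:   "", // empty selects the default backend (etcd3)
		Prefix: "11762e59-1cdd-457b-89ef-c5f604a5ded0",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // 300000000000 ns in the log
		CountMetricPollPeriod: time.Minute,     // 60000000000 ns in the log
	}
	fmt.Printf("%+v\n", cfg)
}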
I0812 13:40:14.886650  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:14.886769  112807 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0812 13:40:14.886784  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:14.886799  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:14.886810  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:14.886818  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:14.886853  112807 httplog.go:90] GET /healthz: (360.99µs) 0 [Go-http-client/1.1 127.0.0.1:42692]
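(Editor's note: the repeated blocks that follow are the test harness polling GET /healthz until every check flips from [-] to [+]. Below is a minimal standalone sketch of such a wait loop using only the Go standard library; the address, port and timeout are illustrative, since the real integration test talks to an in-process apiserver rather than a fixed URL.)

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForHealthz polls /healthz until the server returns 200 OK, printing the
// names of checks that are still failing (the "[-]" lines in the verbose body,
// in the same format as the log above).
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			for _, line := range strings.Split(string(body), "\n") {
				if strings.HasPrefix(line, "[-]") {
					fmt.Println("still failing:", line)
				}
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}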
I0812 13:40:14.889299  112807 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.963955ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42694]
I0812 13:40:14.894061  112807 httplog.go:90] GET /api/v1/services: (2.10794ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42694]
I0812 13:40:14.900223  112807 httplog.go:90] GET /api/v1/services: (1.880101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42694]
I0812 13:40:14.903617  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:14.903671  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:14.903709  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:14.903718  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:14.903726  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:14.903751  112807 httplog.go:90] GET /healthz: (271.257µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:14.904522  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.111226ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42694]
I0812 13:40:14.905492  112807 httplog.go:90] GET /api/v1/services: (993.91µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42696]
I0812 13:40:14.905517  112807 httplog.go:90] GET /api/v1/services: (1.313809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:14.906971  112807 httplog.go:90] POST /api/v1/namespaces: (1.773055ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42694]
I0812 13:40:14.908376  112807 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.038466ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:14.910509  112807 httplog.go:90] POST /api/v1/namespaces: (1.535683ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:14.912117  112807 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.049922ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:14.914767  112807 httplog.go:90] POST /api/v1/namespaces: (1.861466ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:14.988029  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:14.988085  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:14.988097  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:14.988104  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:14.988111  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:14.988143  112807 httplog.go:90] GET /healthz: (282.821µs) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.004723  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.004769  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.004780  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.004788  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.004794  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.004830  112807 httplog.go:90] GET /healthz: (327.525µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.088201  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.088533  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.088651  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.088771  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.088843  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.089055  112807 httplog.go:90] GET /healthz: (1.061252ms) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.104813  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.105132  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.105217  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.105298  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.105370  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.105766  112807 httplog.go:90] GET /healthz: (1.122455ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.188148  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.188205  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.188220  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.188231  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.188240  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.188280  112807 httplog.go:90] GET /healthz: (330.568µs) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.204564  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.204603  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.204613  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.204621  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.204627  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.204650  112807 httplog.go:90] GET /healthz: (223.721µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.288169  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.288211  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.288221  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.288229  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.288234  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.288292  112807 httplog.go:90] GET /healthz: (293.936µs) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.304807  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.304858  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.304868  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.304876  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.304884  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.304930  112807 httplog.go:90] GET /healthz: (324.422µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.387976  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.388039  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.388050  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.388057  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.388063  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.388092  112807 httplog.go:90] GET /healthz: (272.002µs) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.404787  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.404837  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.404848  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.404908  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.404914  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.404978  112807 httplog.go:90] GET /healthz: (360.597µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.488067  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.488117  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.488131  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.488142  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.488168  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.488223  112807 httplog.go:90] GET /healthz: (302.156µs) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.504677  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.504736  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.504748  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.504756  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.504762  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.504793  112807 httplog.go:90] GET /healthz: (292.74µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.588136  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.588206  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.588218  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.588225  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.588233  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.588302  112807 httplog.go:90] GET /healthz: (332.265µs) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.604898  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.604953  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.604971  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.604982  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.604992  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.605290  112807 httplog.go:90] GET /healthz: (657.341µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.688758  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.688806  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.688819  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.688827  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.688833  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.688864  112807 httplog.go:90] GET /healthz: (316.163µs) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.704704  112807 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0812 13:40:15.704752  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.704763  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.704771  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.704777  112807 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.704819  112807 httplog.go:90] GET /healthz: (346.779µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.735205  112807 client.go:354] parsed scheme: ""
I0812 13:40:15.735257  112807 client.go:354] scheme "" not registered, fallback to default scheme
I0812 13:40:15.735303  112807 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0812 13:40:15.735427  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:15.735916  112807 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0812 13:40:15.736001  112807 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0812 13:40:15.789888  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.789950  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.789967  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.789976  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.790034  112807 httplog.go:90] GET /healthz: (1.882045ms) 0 [Go-http-client/1.1 127.0.0.1:42692]
I0812 13:40:15.806336  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.806383  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.806396  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.806405  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.806469  112807 httplog.go:90] GET /healthz: (1.878242ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.888809  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.606682ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42708]
I0812 13:40:15.888884  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.952085ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.888923  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.888811  112807 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.885917ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42696]
I0812 13:40:15.888946  112807 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0812 13:40:15.888954  112807 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0812 13:40:15.888960  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0812 13:40:15.888984  112807 httplog.go:90] GET /healthz: (1.009199ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:15.890979  112807 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.619672ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.891093  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.694365ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:15.891659  112807 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.331682ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.891905  112807 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0812 13:40:15.892312  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (871.174µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.892873  112807 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (830.007µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.893374  112807 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.923235ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:15.893822  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (799.726µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.894462  112807 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.08921ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.894806  112807 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0812 13:40:15.894821  112807 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0812 13:40:15.895210  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (808.19µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0812 13:40:15.896417  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (660.703µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.897882  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (789.198µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.898915  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (746.984µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.900728  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (916.953µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.902282  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (986.628µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.904695  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838125ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.905165  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0812 13:40:15.905358  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.905394  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:15.905428  112807 httplog.go:90] GET /healthz: (1.008006ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:15.906486  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.080941ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.908921  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.781804ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.909272  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0812 13:40:15.910368  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (875.135µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.912807  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.903151ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.913409  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0812 13:40:15.915033  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.274216ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.917060  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.565476ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.917384  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0812 13:40:15.919060  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.121723ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.921445  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.885637ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.921862  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0812 13:40:15.923256  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.159097ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.925488  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.707385ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.925725  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0812 13:40:15.927252  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.282242ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.929526  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.738695ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.929787  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0812 13:40:15.931327  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.290043ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.933445  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.677519ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.933911  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0812 13:40:15.935224  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.021315ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.937826  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.030128ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.938178  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0812 13:40:15.940063  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.618533ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.943357  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.593896ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.943764  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0812 13:40:15.945455  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.411412ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.948143  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.03754ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.948494  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0812 13:40:15.949799  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (945.688µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.952064  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.754649ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.952387  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0812 13:40:15.953559  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (943.941µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.955759  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.686802ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.956026  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0812 13:40:15.957872  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.532885ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.960440  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.017196ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.960996  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0812 13:40:15.962493  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.224626ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.965195  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.898446ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.965533  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0812 13:40:15.967414  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.503849ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.970597  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.375553ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.971088  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0812 13:40:15.972968  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.442985ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.975379  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.727713ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.975660  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0812 13:40:15.977362  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.340793ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.979989  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.906223ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.980339  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0812 13:40:15.982002  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.304204ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.985374  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.353588ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.985646  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0812 13:40:15.987253  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.326516ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:15.988531  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:15.988565  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:15.988604  112807 httplog.go:90] GET /healthz: (993.262µs) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:15.989566  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.689482ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:15.989887  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0812 13:40:15.991190  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.033489ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:15.993150  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.46305ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:15.993789  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0812 13:40:15.995276  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.157928ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:15.997325  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.572786ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:15.997672  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0812 13:40:15.999111  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.039142ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.001416  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.788903ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.001759  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0812 13:40:16.003083  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.029799ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.005386  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.005417  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.005462  112807 httplog.go:90] GET /healthz: (1.09009ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.006777  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.954058ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.007084  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0812 13:40:16.008394  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (929.575µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.010950  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.008111ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.011247  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
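Each clusterrole in this stretch of the log follows the same reconcile pattern: a GET that returns 404, a POST that returns 201, then a storage_rbac.go line announcing the role was created; system:volume-scheduler, created just above, is one of the roles this volume scheduling test's apiserver bootstraps. A hedged client-go sketch of that get-then-create pattern follows; the function and its error handling are illustrative assumptions, not the storage_rbac.go implementation.

// ensureClusterRole is an illustrative sketch of the GET-404-then-POST-201
// reconcile pattern visible in the log for each bootstrap ClusterRole.
package bootstrap

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	if _, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{}); err == nil {
		return nil // role already exists; nothing to reconcile in this sketch
	} else if !apierrors.IsNotFound(err) {
		return err
	}
	if _, err := cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
		return err
	}
	fmt.Printf("created clusterrole %s\n", role.Name)
	return nil
}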
I0812 13:40:16.012602  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.03123ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.015349  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.109169ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.015833  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0812 13:40:16.017130  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.088163ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.019655  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.952521ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.020002  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0812 13:40:16.021514  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.174431ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.024027  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.947644ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.024318  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0812 13:40:16.025528  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.010676ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.027602  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.623763ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.027997  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0812 13:40:16.030412  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (2.193448ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.033272  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.107577ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.033730  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0812 13:40:16.035494  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.25652ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.038444  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.309198ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.038814  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0812 13:40:16.040280  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.181392ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.043211  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.421036ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.043514  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0812 13:40:16.044950  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.131219ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.047335  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.840373ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.047607  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0812 13:40:16.049175  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.250653ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.051593  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.824923ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.052098  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0812 13:40:16.053722  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.402872ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.056013  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.829267ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.056338  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0812 13:40:16.057723  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.104575ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.059925  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.703114ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.060338  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0812 13:40:16.061996  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.388858ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.064342  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.743463ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.064791  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0812 13:40:16.066288  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.22667ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.068769  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.841784ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.069097  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0812 13:40:16.070700  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.322673ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.073316  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.135696ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.073628  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0812 13:40:16.075413  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.401372ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.077940  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.913475ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.078223  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0812 13:40:16.080121  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.634991ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.082780  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.967796ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.082996  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0812 13:40:16.084304  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.086865ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.086776  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.020918ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.087049  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0812 13:40:16.088476  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.207691ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.088654  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.088870  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.088913  112807 httplog.go:90] GET /healthz: (1.286516ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:16.090871  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.855097ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.091129  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0812 13:40:16.092446  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.063185ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.094784  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.927775ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.095134  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0812 13:40:16.096916  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.431206ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.099539  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.990312ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.100259  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0812 13:40:16.101735  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.216502ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.104208  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.90638ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.104463  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0812 13:40:16.105656  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.106132  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.106261  112807 httplog.go:90] GET /healthz: (1.703939ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.106020  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.344804ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.108717  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.804057ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.109076  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0812 13:40:16.110506  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.175275ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.112778  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639442ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.113062  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0812 13:40:16.114388  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (991.174µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.117108  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.100133ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.117517  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0812 13:40:16.118993  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.161622ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.121004  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514936ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.121276  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0812 13:40:16.122534  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (981.232µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.125509  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.431835ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.125862  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0812 13:40:16.128814  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.804534ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.150408  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.120296ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.150772  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0812 13:40:16.169454  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (2.054942ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.189296  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.189347  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.189397  112807 httplog.go:90] GET /healthz: (1.555967ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:16.190274  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.91277ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.190491  112807 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0812 13:40:16.206138  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.206195  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.206251  112807 httplog.go:90] GET /healthz: (1.697217ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.208250  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.334386ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.230268  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.736568ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.230818  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0812 13:40:16.248921  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.588024ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.272247  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.471296ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.272992  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0812 13:40:16.289498  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.222437ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.290058  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.290188  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.290469  112807 httplog.go:90] GET /healthz: (2.400394ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:16.307082  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.307355  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.307721  112807 httplog.go:90] GET /healthz: (3.100546ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.311670  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.472007ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.312436  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0812 13:40:16.329097  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.904117ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.350589  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.414832ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.350936  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0812 13:40:16.368999  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.807735ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.389930  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.390005  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.390109  112807 httplog.go:90] GET /healthz: (2.287704ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:16.390541  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.193724ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.390872  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0812 13:40:16.406093  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.406296  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.406494  112807 httplog.go:90] GET /healthz: (1.966038ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.408424  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.364358ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.430031  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.814545ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.430328  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0812 13:40:16.449384  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.192601ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.470336  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.090887ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.470679  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0812 13:40:16.488877  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.488912  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.488940  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.684209ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.488943  112807 httplog.go:90] GET /healthz: (1.042644ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:16.506211  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.506253  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.506308  112807 httplog.go:90] GET /healthz: (1.628469ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.509606  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.693261ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.509998  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0812 13:40:16.529281  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.058682ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.550223  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.955388ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.550717  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0812 13:40:16.569433  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.082076ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.589804  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.589850  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.589906  112807 httplog.go:90] GET /healthz: (2.040827ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:16.590021  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.880179ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.590270  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0812 13:40:16.610122  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.610169  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.610222  112807 httplog.go:90] GET /healthz: (5.650594ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.611954  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.151712ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.629899  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.764806ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.630306  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0812 13:40:16.648897  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.599939ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.669605  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.414175ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.669948  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0812 13:40:16.688950  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.825399ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.689209  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.689234  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.689264  112807 httplog.go:90] GET /healthz: (1.27201ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:16.707146  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.707183  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.707237  112807 httplog.go:90] GET /healthz: (2.742445ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:16.709433  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.251599ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.709870  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0812 13:40:16.729289  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.895534ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.749730  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.621211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.750035  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0812 13:40:16.769145  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.885314ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.789817  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.789862  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.789917  112807 httplog.go:90] GET /healthz: (2.098355ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:16.790104  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.841916ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.790319  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0812 13:40:16.806350  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.806423  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.806514  112807 httplog.go:90] GET /healthz: (1.719573ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.808521  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.484381ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.830213  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.020189ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.830507  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0812 13:40:16.848786  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.599798ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.869809  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.705255ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.870453  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0812 13:40:16.888762  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.888797  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.888856  112807 httplog.go:90] GET /healthz: (1.209193ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:16.889021  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.853741ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.906212  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.906258  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.906319  112807 httplog.go:90] GET /healthz: (1.721217ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.909122  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.145826ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.909591  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0812 13:40:16.929011  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.721883ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.949934  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.769562ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.950618  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0812 13:40:16.968970  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.826478ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.989586  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:16.989642  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:16.989807  112807 httplog.go:90] GET /healthz: (1.898605ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:16.990887  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.356112ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:16.991398  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0812 13:40:17.006120  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.006443  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.006757  112807 httplog.go:90] GET /healthz: (2.288773ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.009362  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.211516ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.030968  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.713579ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.031522  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0812 13:40:17.049006  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.78612ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.069671  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.473211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.070064  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0812 13:40:17.089145  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.089208  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.089247  112807 httplog.go:90] GET /healthz: (1.364576ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:17.089253  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.141722ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.106095  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.106150  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.106195  112807 httplog.go:90] GET /healthz: (1.633824ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.109194  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.227744ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.109769  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0812 13:40:17.129252  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.934641ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.149774  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.521907ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.150228  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0812 13:40:17.168926  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.680476ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.189330  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.189374  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.189421  112807 httplog.go:90] GET /healthz: (1.591072ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:17.189893  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.577455ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.190192  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0812 13:40:17.206214  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.206256  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.206300  112807 httplog.go:90] GET /healthz: (1.646206ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.208326  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.34145ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.229767  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.602815ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.230094  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0812 13:40:17.249264  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.93065ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.269793  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.365161ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.270294  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0812 13:40:17.288892  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.698509ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.289781  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.289845  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.289889  112807 httplog.go:90] GET /healthz: (1.296115ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:17.308119  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.308161  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.308226  112807 httplog.go:90] GET /healthz: (3.71364ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.311076  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.911378ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.312542  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0812 13:40:17.329430  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.234693ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.349948  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.747129ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.350285  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0812 13:40:17.369000  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.739072ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.389262  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.389308  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.389352  112807 httplog.go:90] GET /healthz: (1.584522ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:17.389931  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.60287ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.390240  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0812 13:40:17.406205  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.406406  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.406629  112807 httplog.go:90] GET /healthz: (1.855193ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.408298  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.248967ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.429806  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.607275ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.430112  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0812 13:40:17.448959  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.847452ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.469607  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.480653ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.469914  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0812 13:40:17.488887  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.629001ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.490098  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.490296  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.490514  112807 httplog.go:90] GET /healthz: (1.357134ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:17.505990  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.506345  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.506666  112807 httplog.go:90] GET /healthz: (2.057968ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.509149  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.29254ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.509521  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0812 13:40:17.528920  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.768797ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.550002  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.733601ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.550598  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0812 13:40:17.569100  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.052259ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.589710  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.399907ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.589977  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.590008  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.590038  112807 httplog.go:90] GET /healthz: (2.242902ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:17.590037  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0812 13:40:17.606288  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.606343  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.606396  112807 httplog.go:90] GET /healthz: (1.613001ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.608531  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.542645ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.630049  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.782233ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.630402  112807 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0812 13:40:17.649442  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (2.247701ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.652053  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.641045ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.669987  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.928848ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.670757  112807 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0812 13:40:17.689506  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.689930  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.690177  112807 httplog.go:90] GET /healthz: (2.317667ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:17.689858  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.585399ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.692640  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.723662ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.706473  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.706551  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.706668  112807 httplog.go:90] GET /healthz: (1.975436ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.709529  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.528022ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.710135  112807 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0812 13:40:17.729603  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.207232ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.731934  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.561075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.749731  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.466893ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.750396  112807 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0812 13:40:17.768969  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.824158ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.771080  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.528928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.789259  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.789315  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.789782  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.593334ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:17.790387  112807 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0812 13:40:17.790711  112807 httplog.go:90] GET /healthz: (2.897816ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:17.805914  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.806134  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.806313  112807 httplog.go:90] GET /healthz: (1.838247ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.808916  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.955195ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.811680  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.731799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.830259  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.014265ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.830638  112807 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0812 13:40:17.848966  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.871036ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.851743  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.848218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.870194  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.907912ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.870951  112807 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0812 13:40:17.889268  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.940913ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.889856  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.890035  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.890253  112807 httplog.go:90] GET /healthz: (2.269145ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:17.892074  112807 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.804885ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.906266  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.906332  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.906401  112807 httplog.go:90] GET /healthz: (1.756881ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.909420  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.471566ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.909669  112807 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0812 13:40:17.929101  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.732471ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.931559  112807 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.88027ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.950475  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.963723ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.950846  112807 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0812 13:40:17.968912  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.730313ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.970925  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.527211ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.989276  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.168464ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:17.989644  112807 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0812 13:40:17.993586  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:17.993644  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:17.993743  112807 httplog.go:90] GET /healthz: (1.572837ms) 0 [Go-http-client/1.1 127.0.0.1:42712]
I0812 13:40:18.006126  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:18.006187  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:18.006242  112807 httplog.go:90] GET /healthz: (1.812592ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.008678  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.472183ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.010947  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.711451ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.029620  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.440298ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.030006  112807 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0812 13:40:18.049336  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.95122ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.051662  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.577588ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.070537  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.205954ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.071110  112807 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0812 13:40:18.088911  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.732499ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.089304  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:18.089339  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:18.089387  112807 httplog.go:90] GET /healthz: (1.431432ms) 0 [Go-http-client/1.1 127.0.0.1:42710]
I0812 13:40:18.091323  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.528698ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.106347  112807 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0812 13:40:18.106389  112807 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0812 13:40:18.106489  112807 httplog.go:90] GET /healthz: (1.696197ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.109407  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.325116ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:18.109759  112807 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0812 13:40:18.128987  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.699568ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:18.131417  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.636646ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:18.149983  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.667039ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:18.150336  112807 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0812 13:40:18.169064  112807 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.85829ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:18.171624  112807 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.677445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:18.189170  112807 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.08251ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:18.189569  112807 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0812 13:40:18.190201  112807 httplog.go:90] GET /healthz: (1.888617ms) 200 [Go-http-client/1.1 127.0.0.1:42712]
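Note on the lines above: /healthz keeps returning failure with `[-]poststarthook/rbac/bootstrap-roles failed: reason withheld` while the bootstrap cluster roles, role bindings, and namespace-scoped roles are being created; the first 200 appears only once that post-start hook finishes. The sketch below is a minimal, illustrative readiness poll of such an endpoint, not the test harness's own wait loop; the URL, timeout, and retry spacing are placeholders (the log suggests roughly 100ms between probes).

```go
// Hedged sketch: poll an apiserver-style /healthz endpoint until it reports 200,
// mirroring the failed-then-ok transition visible in the log above.
// The URL and intervals are placeholders, not values taken from the test.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all post-start hooks reported ok
			}
		}
		time.Sleep(100 * time.Millisecond) // retry spacing is an assumption
	}
	return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```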
W0812 13:40:18.190884  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.190914  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.190933  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.190943  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.190956  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.190964  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.190974  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.190984  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.190993  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.191040  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:18.191064  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0812 13:40:18.191098  112807 factory.go:299] Creating scheduler from algorithm provider 'DefaultProvider'
I0812 13:40:18.191121  112807 factory.go:387] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0812 13:40:18.191764  112807 reflector.go:122] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.191790  112807 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.191776  112807 reflector.go:122] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.191901  112807 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.191911  112807 reflector.go:122] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.191940  112807 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.192132  112807 reflector.go:122] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.192145  112807 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.192318  112807 reflector.go:122] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.192331  112807 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.192560  112807 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (553.127µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.192807  112807 reflector.go:122] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.192826  112807 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.192869  112807 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (354.264µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:18.193182  112807 reflector.go:122] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.193198  112807 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.193303  112807 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (288.45µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:18.193635  112807 get.go:250] Starting watch for /api/v1/services, rv=56522 labels= fields= timeout=5m32s
I0812 13:40:18.191923  112807 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.193675  112807 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.193635  112807 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.193747  112807 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.193979  112807 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (895.074µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42744]
I0812 13:40:18.194214  112807 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (347.211µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42752]
I0812 13:40:18.194242  112807 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=56522 labels= fields= timeout=7m24s
I0812 13:40:18.194584  112807 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (273.966µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42748]
I0812 13:40:18.194738  112807 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (639.86µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42742]
I0812 13:40:18.194799  112807 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (402.027µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0812 13:40:18.194860  112807 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=56522 labels= fields= timeout=7m43s
I0812 13:40:18.194886  112807 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (318.299µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42756]
I0812 13:40:18.195447  112807 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=56522 labels= fields= timeout=9m13s
I0812 13:40:18.195610  112807 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=56522 labels= fields= timeout=7m24s
I0812 13:40:18.195634  112807 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=56522 labels= fields= timeout=9m24s
I0812 13:40:18.196097  112807 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=56522 labels= fields= timeout=7m17s
I0812 13:40:18.196297  112807 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=56522 labels= fields= timeout=7m59s
I0812 13:40:18.196422  112807 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=56522 labels= fields= timeout=7m1s
I0812 13:40:18.196430  112807 reflector.go:122] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.196483  112807 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.196734  112807 reflector.go:122] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.196756  112807 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0812 13:40:18.197434  112807 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (555.521µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42764]
I0812 13:40:18.197449  112807 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (417.644µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0812 13:40:18.198391  112807 get.go:250] Starting watch for /api/v1/pods, rv=56522 labels= fields= timeout=5m53s
I0812 13:40:18.198454  112807 get.go:250] Starting watch for /api/v1/nodes, rv=56522 labels= fields= timeout=6m35s
I0812 13:40:18.216655  112807 httplog.go:90] GET /healthz: (11.822637ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:18.222288  112807 httplog.go:90] GET /api/v1/namespaces/default: (3.264533ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:18.228267  112807 httplog.go:90] POST /api/v1/namespaces: (4.971613ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:18.231452  112807 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.241157ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:18.240002  112807 httplog.go:90] POST /api/v1/namespaces/default/services: (7.594398ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:18.242598  112807 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.803766ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:18.246581  112807 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (3.162724ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:18.298352  112807 shared_informer.go:177] caches populated
I0812 13:40:18.398858  112807 shared_informer.go:177] caches populated
I0812 13:40:18.499119  112807 shared_informer.go:177] caches populated
I0812 13:40:18.599650  112807 shared_informer.go:177] caches populated
I0812 13:40:18.699986  112807 shared_informer.go:177] caches populated
I0812 13:40:18.800552  112807 shared_informer.go:177] caches populated
I0812 13:40:18.900983  112807 shared_informer.go:177] caches populated
I0812 13:40:19.001273  112807 shared_informer.go:177] caches populated
I0812 13:40:19.101538  112807 shared_informer.go:177] caches populated
I0812 13:40:19.201903  112807 shared_informer.go:177] caches populated
I0812 13:40:19.302203  112807 shared_informer.go:177] caches populated
I0812 13:40:19.402542  112807 shared_informer.go:177] caches populated
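Note on the lines above: the repeated "Starting reflector ... (0s) from k8s.io/client-go/informers/factory.go:133" messages followed by "caches populated" correspond to a client-go shared informer factory being started and its caches syncing before the scheduler and PV controller proceed. Below is a hedged, self-contained sketch of that pattern, assuming a recent client-go; the kubeconfig path is a placeholder and this is not the test's own wiring.

```go
// Hedged sketch: start a shared informer factory for the resource types the
// log lists and watches (Pods, Nodes, PVs, PVCs, StorageClasses) and wait for
// cache sync, i.e. the "caches populated" stage. Placeholder kubeconfig path.
package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Resync period 0 matches the "(0s)" reflectors in the log.
	factory := informers.NewSharedInformerFactory(client, 0)

	// Register informers; each one gets its own reflector when Start is called.
	podInformer := factory.Core().V1().Pods().Informer()
	factory.Core().V1().Nodes().Informer()
	factory.Core().V1().PersistentVolumes().Informer()
	factory.Core().V1().PersistentVolumeClaims().Informer()
	factory.Storage().V1().StorageClasses().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)            // starts listing and watching each resource
	factory.WaitForCacheSync(stop) // blocks until the local caches are populated

	fmt.Println("pods currently in cache:", len(podInformer.GetStore().List()))
}
```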
I0812 13:40:19.402948  112807 plugins.go:629] Loaded volume plugin "kubernetes.io/mock-provisioner"
W0812 13:40:19.403000  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:19.403044  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:19.403070  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:19.403093  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0812 13:40:19.403110  112807 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0812 13:40:19.403161  112807 pv_controller_base.go:282] Starting persistent volume controller
I0812 13:40:19.403191  112807 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
I0812 13:40:19.403401  112807 reflector.go:122] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.403433  112807 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.403520  112807 reflector.go:122] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.403538  112807 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.403545  112807 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.403557  112807 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.403860  112807 reflector.go:122] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.403872  112807 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.404082  112807 reflector.go:122] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.404107  112807 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0812 13:40:19.404996  112807 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (858.93µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:19.405037  112807 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (538.319µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42910]
I0812 13:40:19.404996  112807 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (743.169µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42908]
I0812 13:40:19.405178  112807 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (829.627µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42902]
I0812 13:40:19.405458  112807 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.187962ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42904]
I0812 13:40:19.405581  112807 get.go:250] Starting watch for /api/v1/nodes, rv=56522 labels= fields= timeout=5m23s
I0812 13:40:19.405590  112807 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=56522 labels= fields= timeout=8m28s
I0812 13:40:19.406070  112807 get.go:250] Starting watch for /api/v1/pods, rv=56522 labels= fields= timeout=6m12s
I0812 13:40:19.406197  112807 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=56522 labels= fields= timeout=8m41s
I0812 13:40:19.406736  112807 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=56522 labels= fields= timeout=6m6s
I0812 13:40:19.503400  112807 shared_informer.go:177] caches populated
I0812 13:40:19.503645  112807 controller_utils.go:1036] Caches are synced for persistent volume controller
I0812 13:40:19.503797  112807 pv_controller_base.go:158] controller initialized
I0812 13:40:19.503400  112807 shared_informer.go:177] caches populated
I0812 13:40:19.503984  112807 pv_controller_base.go:419] resyncing PV controller
I0812 13:40:19.604409  112807 shared_informer.go:177] caches populated
I0812 13:40:19.704698  112807 shared_informer.go:177] caches populated
I0812 13:40:19.805082  112807 shared_informer.go:177] caches populated
I0812 13:40:19.905392  112807 shared_informer.go:177] caches populated
I0812 13:40:19.909989  112807 httplog.go:90] POST /api/v1/nodes: (3.577658ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.910018  112807 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I0812 13:40:19.912469  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.858963ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.914776  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.668778ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.915156  112807 volume_binding_test.go:751] Running test topolgy unsatisfied
I0812 13:40:19.916894  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.421765ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.918632  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.309865ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.920311  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.279211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.923071  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (1.82676ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.923342  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch", version 56685
I0812 13:40:19.923364  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:19.923401  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch]: no volume found
I0812 13:40:19.923442  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch] status: set phase Pending
I0812 13:40:19.923453  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch] status: phase Pending already set
I0812 13:40:19.923576  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-topomismatch", UID:"138a7e3a-68eb-4cfc-970e-719c9855beb5", APIVersion:"v1", ResourceVersion:"56685", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0812 13:40:19.925230  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (1.762525ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.925898  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch
I0812 13:40:19.925931  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch
I0812 13:40:19.926038  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.916627ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42918]
I0812 13:40:19.926131  112807 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch", PVC "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch" on node "node-1"
I0812 13:40:19.926174  112807 scheduler_binder.go:723] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch"
I0812 13:40:19.926223  112807 factory.go:557] Unable to schedule volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I0812 13:40:19.926274  112807 factory.go:631] Updating pod condition for volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I0812 13:40:19.927995  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.303152ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.928580  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-topomismatch/status: (2.014608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42918]
I0812 13:40:19.929405  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-topomismatch: (1.152611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
E0812 13:40:19.929658  112807 factory.go:597] pod is already present in the activeQ
I0812 13:40:19.930286  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-topomismatch: (1.126425ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42918]
I0812 13:40:19.930767  112807 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch on any node.
I0812 13:40:19.931014  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch
I0812 13:40:19.931032  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch
I0812 13:40:19.931229  112807 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch", PVC "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch" on node "node-1"
I0812 13:40:19.931270  112807 scheduler_binder.go:723] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch"
I0812 13:40:19.931324  112807 factory.go:557] Unable to schedule volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I0812 13:40:19.931429  112807 factory.go:631] Updating pod condition for volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I0812 13:40:19.932789  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-topomismatch: (1.051018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:19.933048  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-topomismatch: (1.25344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:19.933249  112807 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch on any node.
I0812 13:40:19.933714  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.533218ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42922]
I0812 13:40:20.028768  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-topomismatch: (2.145156ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.031499  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-topomismatch: (1.848495ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.037001  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch
I0812 13:40:20.037078  112807 scheduler.go:473] Skip schedule deleting pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-topomismatch
I0812 13:40:20.039623  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.033352ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.040337  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (8.258176ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.044638  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (3.587447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.044779  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-topomismatch" deleted
I0812 13:40:20.046582  112807 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.401783ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.065109  112807 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (17.571875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
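Note on the test case above: the claim pvc-topomismatch waits for a first consumer, and the binder then reports that node-1 "cannot satisfy provisioning topology requirements", so the pod stays Unschedulable ("0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind"). The sketch below shows the kind of StorageClass that produces that situation: WaitForFirstConsumer binding plus allowedTopologies restricted to a zone the node is not labeled with. All names, the topology key, and the zone value are illustrative assumptions, not the test's actual fixtures; a recent client-go is assumed.

```go
// Hedged sketch: create a WaitForFirstConsumer StorageClass whose
// allowedTopologies cannot be satisfied by any node, mirroring the
// "cannot satisfy provisioning topology requirements" rejection above.
// Names, topology key, and zone value are placeholders.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	waitMode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "topomismatch-example"},
		Provisioner:       "kubernetes.io/mock-provisioner", // the plugin name loaded in the log
		VolumeBindingMode: &waitMode,
		// Restrict provisioning to a zone the node does not carry, so the
		// scheduler's volume-binding check rejects every candidate node.
		AllowedTopologies: []corev1.TopologySelectorTerm{{
			MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{"zone-that-does-not-exist"},
			}},
		}},
	}

	created, err := client.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created StorageClass", created.Name)
}
```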
I0812 13:40:20.065350  112807 volume_binding_test.go:751] Running test wait one bound, one provisioned
I0812 13:40:20.067323  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.667905ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.073609  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (5.306989ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.076970  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.696892ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.079873  112807 httplog.go:90] POST /api/v1/persistentvolumes: (1.874029ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.083627  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (1.895088ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.084331  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind", version 56706
I0812 13:40:20.084359  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.084376  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: no volume found
I0812 13:40:20.084394  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind] status: set phase Pending
I0812 13:40:20.084404  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind] status: phase Pending already set
I0812 13:40:20.084423  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-w-canbind", UID:"868fa062-15f6-4044-b893-92c38ef07d0a", APIVersion:"v1", ResourceVersion:"56706", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0812 13:40:20.084662  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind", version 56705
I0812 13:40:20.084719  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I0812 13:40:20.084735  112807 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0812 13:40:20.084742  112807 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0812 13:40:20.088236  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (3.895247ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.088873  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision", version 56708
I0812 13:40:20.088962  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (4.291112ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.088968  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.089002  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:20.089025  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Pending
I0812 13:40:20.089037  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: phase Pending already set
I0812 13:40:20.089056  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-canprovision", UID:"dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc", APIVersion:"v1", ResourceVersion:"56708", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0812 13:40:20.089115  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (3.900515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42932]
I0812 13:40:20.089471  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 56710
I0812 13:40:20.089507  112807 pv_controller.go:798] volume "pv-w-canbind" entered phase "Available"
I0812 13:40:20.090439  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (1.778274ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.090822  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision
I0812 13:40:20.090853  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision
I0812 13:40:20.091067  112807 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision", PVC "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" on node "node-1"
I0812 13:40:20.091097  112807 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision", PVC "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" on node "node-1"
I0812 13:40:20.091113  112807 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision" that has no matching volumes on node "node-1" ...
I0812 13:40:20.091159  112807 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision", node "node-1"
I0812 13:40:20.091185  112807 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind", version 56706
I0812 13:40:20.091194  112807 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision", version 56708
I0812 13:40:20.091248  112807 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision", node "node-1"
I0812 13:40:20.092143  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.457175ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.093503  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 56710
I0812 13:40:20.093536  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I0812 13:40:20.093552  112807 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0812 13:40:20.093559  112807 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0812 13:40:20.093565  112807 pv_controller.go:780] updating PersistentVolume[pv-w-canbind]: phase Available already set
I0812 13:40:20.093812  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-canbind: (2.259385ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42920]
I0812 13:40:20.094446  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" with version 56713
I0812 13:40:20.094481  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.094504  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: no volume found
I0812 13:40:20.094512  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: started
I0812 13:40:20.094527  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind[868fa062-15f6-4044-b893-92c38ef07d0a]]
I0812 13:40:20.094579  112807 pv_controller.go:1372] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind] started, class: "wait-xr5m"
I0812 13:40:20.098382  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-canbind: (3.494571ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.098653  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" with version 56715
I0812 13:40:20.098795  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (3.316379ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42932]
I0812 13:40:20.099816  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" with version 56715
I0812 13:40:20.099858  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.099886  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: no volume found
I0812 13:40:20.099893  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: started
I0812 13:40:20.099910  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind[868fa062-15f6-4044-b893-92c38ef07d0a]]
I0812 13:40:20.099916  112807 pv_controller.go:1642] operation "provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind[868fa062-15f6-4044-b893-92c38ef07d0a]" is already running, skipping
I0812 13:40:20.099931  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56716
I0812 13:40:20.099940  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.099949  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:20.099953  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:20.099958  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]]
I0812 13:40:20.099992  112807 pv_controller.go:1372] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] started, class: "wait-xr5m"
I0812 13:40:20.102014  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-868fa062-15f6-4044-b893-92c38ef07d0a: (3.149875ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.102027  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.758186ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42932]
I0812 13:40:20.102431  112807 pv_controller.go:1476] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" created
I0812 13:40:20.102513  112807 pv_controller.go:1493] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: trying to save volume pvc-868fa062-15f6-4044-b893-92c38ef07d0a
I0812 13:40:20.102750  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56718
I0812 13:40:20.102786  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.102820  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:20.102829  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:20.102850  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]]
I0812 13:40:20.102867  112807 pv_controller.go:1642] operation "provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]" is already running, skipping
I0812 13:40:20.102994  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56718
I0812 13:40:20.106292  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc: (3.020156ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.107180  112807 pv_controller.go:1476] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" created
I0812 13:40:20.107204  112807 pv_controller.go:1493] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: trying to save volume pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc
I0812 13:40:20.107807  112807 httplog.go:90] POST /api/v1/persistentvolumes: (4.99889ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42932]
I0812 13:40:20.108040  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a", version 56719
I0812 13:40:20.108148  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind (uid: 868fa062-15f6-4044-b893-92c38ef07d0a)", boundByController: true
I0812 13:40:20.108249  112807 pv_controller.go:1501] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" saved
I0812 13:40:20.108393  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" with version 56719
I0812 13:40:20.108448  112807 pv_controller.go:1554] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" provisioned for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.108817  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-w-canbind", UID:"868fa062-15f6-4044-b893-92c38ef07d0a", APIVersion:"v1", ResourceVersion:"56715", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-868fa062-15f6-4044-b893-92c38ef07d0a using kubernetes.io/mock-provisioner
I0812 13:40:20.108938  112807 httplog.go:90] POST /api/v1/persistentvolumes: (1.53682ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.108284  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind
I0812 13:40:20.109238  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.109742  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:20.109903  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" with version 56715
I0812 13:40:20.109996  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.110169  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" found: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind (uid: 868fa062-15f6-4044-b893-92c38ef07d0a)", boundByController: true
I0812 13:40:20.110257  112807 pv_controller.go:931] binding volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.110378  112807 pv_controller.go:829] updating PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.110460  112807 pv_controller.go:841] updating PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.110535  112807 pv_controller.go:777] updating PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: set phase Bound
I0812 13:40:20.110863  112807 pv_controller.go:1501] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" saved
I0812 13:40:20.110993  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc", version 56720
I0812 13:40:20.111025  112807 pv_controller.go:1554] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" provisioned for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.111069  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-canprovision", UID:"dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc", APIVersion:"v1", ResourceVersion:"56718", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc using kubernetes.io/mock-provisioner
I0812 13:40:20.112417  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" with version 56720
I0812 13:40:20.112482  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc)", boundByController: true
I0812 13:40:20.112493  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:20.112523  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.112538  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:20.115907  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (6.964834ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42932]
I0812 13:40:20.118532  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.035823ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42932]
I0812 13:40:20.122629  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-868fa062-15f6-4044-b893-92c38ef07d0a/status: (5.274625ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.122924  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" with version 56727
I0812 13:40:20.123112  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind (uid: 868fa062-15f6-4044-b893-92c38ef07d0a)", boundByController: true
I0812 13:40:20.123246  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind
I0812 13:40:20.123332  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.123347  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:20.123441  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" with version 56727
I0812 13:40:20.123567  112807 pv_controller.go:798] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" entered phase "Bound"
I0812 13:40:20.123651  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: binding to "pvc-868fa062-15f6-4044-b893-92c38ef07d0a"
I0812 13:40:20.123769  112807 pv_controller.go:901] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.127783  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-canbind: (3.095051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.128051  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" with version 56728
I0812 13:40:20.128093  112807 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: bound to "pvc-868fa062-15f6-4044-b893-92c38ef07d0a"
I0812 13:40:20.128103  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind] status: set phase Bound
I0812 13:40:20.130562  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-canbind/status: (2.045611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.130827  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" with version 56729
I0812 13:40:20.130866  112807 pv_controller.go:742] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" entered phase "Bound"
I0812 13:40:20.130880  112807 pv_controller.go:957] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.130900  112807 pv_controller.go:958] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind (uid: 868fa062-15f6-4044-b893-92c38ef07d0a)", boundByController: true
I0812 13:40:20.130919  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-868fa062-15f6-4044-b893-92c38ef07d0a", bindCompleted: true, boundByController: true
I0812 13:40:20.130968  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56718
I0812 13:40:20.130981  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.131016  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" found: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc)", boundByController: true
I0812 13:40:20.131032  112807 pv_controller.go:931] binding volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.131042  112807 pv_controller.go:829] updating PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.131054  112807 pv_controller.go:841] updating PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.131061  112807 pv_controller.go:777] updating PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: set phase Bound
I0812 13:40:20.133432  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc/status: (2.072696ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.133766  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" with version 56730
I0812 13:40:20.133803  112807 pv_controller.go:798] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" entered phase "Bound"
I0812 13:40:20.133818  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: binding to "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc"
I0812 13:40:20.133833  112807 pv_controller.go:901] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.133831  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" with version 56730
I0812 13:40:20.133877  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc)", boundByController: true
I0812 13:40:20.133964  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:20.134009  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:20.134063  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:20.135920  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.870634ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.136350  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56732
I0812 13:40:20.136390  112807 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: bound to "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc"
I0812 13:40:20.136407  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Bound
I0812 13:40:20.139113  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision/status: (2.46629ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.139463  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56733
I0812 13:40:20.139599  112807 pv_controller.go:742] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" entered phase "Bound"
I0812 13:40:20.139755  112807 pv_controller.go:957] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.139881  112807 pv_controller.go:958] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc)", boundByController: true
I0812 13:40:20.139977  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc", bindCompleted: true, boundByController: true
I0812 13:40:20.140095  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" with version 56729
I0812 13:40:20.140178  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: phase: Bound, bound to: "pvc-868fa062-15f6-4044-b893-92c38ef07d0a", bindCompleted: true, boundByController: true
I0812 13:40:20.140249  112807 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" found: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind (uid: 868fa062-15f6-4044-b893-92c38ef07d0a)", boundByController: true
I0812 13:40:20.140321  112807 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: claim is already correctly bound
I0812 13:40:20.140388  112807 pv_controller.go:931] binding volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.140467  112807 pv_controller.go:829] updating PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.140548  112807 pv_controller.go:841] updating PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.140631  112807 pv_controller.go:777] updating PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: set phase Bound
I0812 13:40:20.140733  112807 pv_controller.go:780] updating PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: phase Bound already set
I0812 13:40:20.140801  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: binding to "pvc-868fa062-15f6-4044-b893-92c38ef07d0a"
I0812 13:40:20.140906  112807 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind]: already bound to "pvc-868fa062-15f6-4044-b893-92c38ef07d0a"
I0812 13:40:20.140999  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind] status: set phase Bound
I0812 13:40:20.141205  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind] status: phase Bound already set
I0812 13:40:20.141301  112807 pv_controller.go:957] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind"
I0812 13:40:20.141379  112807 pv_controller.go:958] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind (uid: 868fa062-15f6-4044-b893-92c38ef07d0a)", boundByController: true
I0812 13:40:20.141454  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-868fa062-15f6-4044-b893-92c38ef07d0a", bindCompleted: true, boundByController: true
I0812 13:40:20.141543  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56733
I0812 13:40:20.142263  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Bound, bound to: "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc", bindCompleted: true, boundByController: true
I0812 13:40:20.142388  112807 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" found: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc)", boundByController: true
I0812 13:40:20.142473  112807 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: claim is already correctly bound
I0812 13:40:20.142543  112807 pv_controller.go:931] binding volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.142605  112807 pv_controller.go:829] updating PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.142676  112807 pv_controller.go:841] updating PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.142756  112807 pv_controller.go:777] updating PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: set phase Bound
I0812 13:40:20.142816  112807 pv_controller.go:780] updating PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: phase Bound already set
I0812 13:40:20.142878  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: binding to "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc"
I0812 13:40:20.142949  112807 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: already bound to "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc"
I0812 13:40:20.143035  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Bound
I0812 13:40:20.143148  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: phase Bound already set
I0812 13:40:20.143236  112807 pv_controller.go:957] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:20.143360  112807 pv_controller.go:958] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc)", boundByController: true
I0812 13:40:20.143461  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc", bindCompleted: true, boundByController: true
I0812 13:40:20.191433  112807 cache.go:676] Couldn't expire cache for pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision. Binding is still in progress.
I0812 13:40:20.193458  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.095564ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.293815  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.333123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.393970  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.493565ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.493598  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.242468ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.593357  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (1.959403ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.693284  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (1.938184ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.793745  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.259239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.893744  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.404788ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:20.993299  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.000087ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.093372  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.081699ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.100019  112807 scheduler_binder.go:545] All PVCs for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision" are bound
I0812 13:40:21.100125  112807 factory.go:622] Attempting to bind pod-pvc-canbind-or-provision to node-1
I0812 13:40:21.103784  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision/binding: (3.180077ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.104378  112807 scheduler.go:614] pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canbind-or-provision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0812 13:40:21.106856  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.063345ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.193505  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canbind-or-provision: (2.016718ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.195580  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-canbind: (1.452084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.197877  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.462493ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.199874  112807 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.396844ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.206869  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (6.544291ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.212480  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" deleted
I0812 13:40:21.212540  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" with version 56730
I0812 13:40:21.212568  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc)", boundByController: true
I0812 13:40:21.212580  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:21.214589  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.209907ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42932]
I0812 13:40:21.214774  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (7.214925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.214898  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" deleted
I0812 13:40:21.214915  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision not found
I0812 13:40:21.214936  112807 pv_controller.go:575] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" is released and reclaim policy "Delete" will be executed
I0812 13:40:21.214955  112807 pv_controller.go:777] updating PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: set phase Released
I0812 13:40:21.217724  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc/status: (2.333416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.217981  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" with version 56812
I0812 13:40:21.218068  112807 pv_controller.go:798] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" entered phase "Released"
I0812 13:40:21.218079  112807 pv_controller.go:1022] reclaimVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: policy is Delete
I0812 13:40:21.218101  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc[db5a905d-a49d-43b9-a7eb-c4411547e9e7]]
I0812 13:40:21.218132  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" with version 56727
I0812 13:40:21.218612  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind (uid: 868fa062-15f6-4044-b893-92c38ef07d0a)", boundByController: true
I0812 13:40:21.218646  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind
I0812 13:40:21.218846  112807 pv_controller.go:1146] deleteVolumeOperation [pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc] started
I0812 13:40:21.221350  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-canbind: (2.27112ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.222488  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind not found
I0812 13:40:21.222518  112807 pv_controller.go:575] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" is released and reclaim policy "Delete" will be executed
I0812 13:40:21.222529  112807 pv_controller.go:777] updating PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: set phase Released
I0812 13:40:21.223504  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc: (2.703669ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42944]
I0812 13:40:21.224013  112807 pv_controller.go:1250] isVolumeReleased[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: volume is released
I0812 13:40:21.224271  112807 pv_controller.go:1285] doDeleteVolume [pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]
I0812 13:40:21.224398  112807 pv_controller.go:1316] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" deleted
I0812 13:40:21.224493  112807 pv_controller.go:1193] deleteVolumeOperation [pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: success
I0812 13:40:21.225793  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-868fa062-15f6-4044-b893-92c38ef07d0a/status: (2.874092ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.226035  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" with version 56813
I0812 13:40:21.226065  112807 pv_controller.go:798] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" entered phase "Released"
I0812 13:40:21.226080  112807 pv_controller.go:1022] reclaimVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: policy is Delete
I0812 13:40:21.226104  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-868fa062-15f6-4044-b893-92c38ef07d0a[43093fd4-e3e2-4a0a-a117-49cbfaec3069]]
I0812 13:40:21.226141  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" with version 56812
I0812 13:40:21.226170  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: phase: Released, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc)", boundByController: true
I0812 13:40:21.226194  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:21.226216  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision not found
I0812 13:40:21.226223  112807 pv_controller.go:1022] reclaimVolume[pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc]: policy is Delete
I0812 13:40:21.226234  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc[db5a905d-a49d-43b9-a7eb-c4411547e9e7]]
I0812 13:40:21.226241  112807 pv_controller.go:1642] operation "delete-pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc[db5a905d-a49d-43b9-a7eb-c4411547e9e7]" is already running, skipping
I0812 13:40:21.226276  112807 pv_controller.go:1146] deleteVolumeOperation [pvc-868fa062-15f6-4044-b893-92c38ef07d0a] started
I0812 13:40:21.226586  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" with version 56813
I0812 13:40:21.226624  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: phase: Released, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind (uid: 868fa062-15f6-4044-b893-92c38ef07d0a)", boundByController: true
I0812 13:40:21.226636  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind
I0812 13:40:21.226658  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind not found
I0812 13:40:21.226666  112807 pv_controller.go:1022] reclaimVolume[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: policy is Delete
I0812 13:40:21.226682  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-868fa062-15f6-4044-b893-92c38ef07d0a[43093fd4-e3e2-4a0a-a117-49cbfaec3069]]
I0812 13:40:21.227066  112807 pv_controller.go:1642] operation "delete-pvc-868fa062-15f6-4044-b893-92c38ef07d0a[43093fd4-e3e2-4a0a-a117-49cbfaec3069]" is already running, skipping
I0812 13:40:21.227105  112807 pv_controller_base.go:212] volume "pv-w-canbind" deleted
I0812 13:40:21.228053  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-868fa062-15f6-4044-b893-92c38ef07d0a: (1.517525ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.228459  112807 pv_controller.go:1250] isVolumeReleased[pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: volume is released
I0812 13:40:21.228485  112807 pv_controller.go:1285] doDeleteVolume [pvc-868fa062-15f6-4044-b893-92c38ef07d0a]
I0812 13:40:21.228511  112807 pv_controller.go:1316] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" deleted
I0812 13:40:21.228521  112807 pv_controller.go:1193] deleteVolumeOperation [pvc-868fa062-15f6-4044-b893-92c38ef07d0a]: success
I0812 13:40:21.229567  112807 pv_controller_base.go:212] volume "pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc" deleted
I0812 13:40:21.229624  112807 pv_controller_base.go:396] deletion of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" was already processed
I0812 13:40:21.230016  112807 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-dca6c6e0-06c0-4eff-8cb7-9fc3279b87fc: (5.261129ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42944]
I0812 13:40:21.231112  112807 pv_controller_base.go:212] volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" deleted
I0812 13:40:21.231114  112807 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-868fa062-15f6-4044-b893-92c38ef07d0a: (2.324743ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.231164  112807 pv_controller_base.go:396] deletion of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind" was already processed
I0812 13:40:21.231374  112807 pv_controller.go:1200] failed to delete volume "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" from database: persistentvolumes "pvc-868fa062-15f6-4044-b893-92c38ef07d0a" not found
I0812 13:40:21.231823  112807 httplog.go:90] DELETE /api/v1/persistentvolumes: (16.445376ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42932]
I0812 13:40:21.243837  112807 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (11.320549ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.244077  112807 volume_binding_test.go:751] Running test one immediate pv prebound, one wait provisioned
I0812 13:40:21.246062  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.643042ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.248183  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.737746ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.250870  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.191588ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.253705  112807 httplog.go:90] POST /api/v1/persistentvolumes: (2.21557ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.253958  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-prebound", version 56824
I0812 13:40:21.254010  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: )", boundByController: false
I0812 13:40:21.254018  112807 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound
I0812 13:40:21.254027  112807 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0812 13:40:21.256527  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (2.20737ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42944]
I0812 13:40:21.257104  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound", version 56825
I0812 13:40:21.257147  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:21.257194  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: )", boundByController: false
I0812 13:40:21.257209  112807 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:21.257221  112807 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:21.257247  112807 pv_controller.go:849] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0812 13:40:21.257419  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.08939ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.257714  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56826
I0812 13:40:21.257742  112807 pv_controller.go:798] volume "pv-i-prebound" entered phase "Available"
I0812 13:40:21.257970  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56826
I0812 13:40:21.258030  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: )", boundByController: false
I0812 13:40:21.258037  112807 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound
I0812 13:40:21.258044  112807 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0812 13:40:21.258055  112807 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0812 13:40:21.259174  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (2.049916ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42944]
I0812 13:40:21.260243  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.501371ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.260532  112807 pv_controller.go:852] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0812 13:40:21.260602  112807 pv_controller.go:934] error binding volume "pv-i-prebound" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0812 13:40:21.260623  112807 pv_controller_base.go:246] could not sync claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0812 13:40:21.260681  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision", version 56827
I0812 13:40:21.260716  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:21.260748  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:21.260770  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Pending
I0812 13:40:21.260787  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: phase Pending already set
I0812 13:40:21.261004  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-canprovision", UID:"6387a028-4a7d-4c76-a386-39b82a81f6de", APIVersion:"v1", ResourceVersion:"56827", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0812 13:40:21.263124  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.876985ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.263770  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (4.124915ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42944]
I0812 13:40:21.263866  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned
I0812 13:40:21.264357  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned
E0812 13:40:21.264725  112807 factory.go:573] Error scheduling volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0812 13:40:21.264921  112807 factory.go:631] Updating pod condition for volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
I0812 13:40:21.267063  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.74255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:21.267641  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned/status: (2.156032ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
E0812 13:40:21.268144  112807 scheduler.go:506] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0812 13:40:21.269393  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.735716ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42948]
I0812 13:40:21.368975  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.589184ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:21.469516  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.716024ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:21.569229  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.729359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:21.668855  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.403411ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:21.769493  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.88309ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:21.868575  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.214374ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:21.969913  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (3.558437ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.069635  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.33736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.168394  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.948841ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.269034  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.602126ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.368872  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.398228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.468702  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.351222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.568772  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.385232ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.668459  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.083487ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.768880  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.499585ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.868515  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.100132ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:22.968902  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.491959ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.068584  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.1752ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.168884  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.428713ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.268728  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.289569ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.369021  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.633093ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.468522  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.133948ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.568926  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.429096ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.668822  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.348799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.768959  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.549774ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.868655  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.267549ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:23.968726  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.440213ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.068604  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.200134ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.168595  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.193275ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.268986  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.453661ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.368810  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.405166ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.469270  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.785868ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.568681  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.17388ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.668652  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.11303ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.769208  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.499313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.868729  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.322866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:24.969200  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.68658ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.068826  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.208031ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.168960  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.21877ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.268676  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.221117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.368572  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.037735ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.469098  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.587454ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.568458  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.056827ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.668741  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.302057ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.768668  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.287359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.868463  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.175814ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:25.968826  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.44267ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.068955  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.56186ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.169041  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.603683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.268813  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.432551ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.368667  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.311742ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.468703  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.310067ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.568794  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.351029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.669548  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.858283ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.768745  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.372575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.868816  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.348524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:26.969481  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.724964ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.068908  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.422103ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.168382  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.95598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.268806  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.413976ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.369591  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (3.127125ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.468623  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.190751ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.568639  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.292999ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.668836  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.426874ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.768482  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.158127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.868494  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.048385ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:27.969177  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.67244ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.068455  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.014886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.168838  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.336523ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.220048  112807 httplog.go:90] GET /api/v1/namespaces/default: (2.12389ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.222648  112807 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.565514ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.225327  112807 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.799043ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.268823  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.432375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.368942  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.271894ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.469079  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.561739ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.569119  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.665183ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.668842  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.369804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.769191  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.560226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.869023  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.593395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:28.968748  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.338619ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.069050  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.371959ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.168781  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.261071ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.268453  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.108367ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.368662  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.250474ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.468612  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.185276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.568579  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.156959ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.668516  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.12545ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.768880  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.304673ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.868291  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.888069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:29.968392  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.983684ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.068757  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.198862ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.168532  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.110002ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.268509  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.162368ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.368495  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.049007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.468914  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.514278ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.569006  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.46779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.668533  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.150332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.769167  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.497362ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.869018  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.584324ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:30.968482  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.098038ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.068721  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.123929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.168548  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.161512ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.268455  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.976163ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.368675  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.251836ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.469047  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.445178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.568841  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.429601ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.668835  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.441809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.768507  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.048499ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.869178  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.690623ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:31.968852  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.27071ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.068623  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.217876ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.169115  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.443371ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.290257  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (23.667874ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.369002  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.700101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.468611  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.257631ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.569012  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.522479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.668758  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.369039ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.768926  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.449714ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.868868  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.35919ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:32.968646  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.278311ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.068385  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.076239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.168728  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.328507ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.268895  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.345545ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.368814  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.380296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.468881  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.423454ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.569262  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.823156ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.669319  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.776403ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.769174  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.700925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.868919  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.424104ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:33.968882  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.434035ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.068986  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.545501ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.169247  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.830565ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.269756  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.902171ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.369482  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.981939ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.468752  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.24135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.504492  112807 pv_controller_base.go:419] resyncing PV controller
I0812 13:40:34.504745  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56826
I0812 13:40:34.504934  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: )", boundByController: false
I0812 13:40:34.504777  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" with version 56825
I0812 13:40:34.505183  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:34.505067  112807 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound
I0812 13:40:34.505277  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: )", boundByController: false
I0812 13:40:34.505317  112807 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.505318  112807 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0812 13:40:34.505340  112807 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0812 13:40:34.505328  112807 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.505388  112807 pv_controller.go:849] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0812 13:40:34.508664  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.732478ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.509109  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned
I0812 13:40:34.509134  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned
I0812 13:40:34.509202  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56893
I0812 13:40:34.509233  112807 pv_controller.go:862] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.509251  112807 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
E0812 13:40:34.509322  112807 factory.go:573] Error scheduling volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0812 13:40:34.509355  112807 factory.go:631] Updating pod condition for volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
E0812 13:40:34.509366  112807 scheduler.go:506] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0812 13:40:34.509780  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56893
I0812 13:40:34.509822  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: 236a8b93-4a32-43fc-aef5-cd25e5466e03)", boundByController: false
I0812 13:40:34.509833  112807 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound
I0812 13:40:34.509850  112807 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:34.509863  112807 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0812 13:40:34.511519  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.455447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:34.511902  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.359369ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.512169  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56894
I0812 13:40:34.512279  112807 pv_controller.go:798] volume "pv-i-prebound" entered phase "Bound"
I0812 13:40:34.512350  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0812 13:40:34.512495  112807 pv_controller.go:901] volume "pv-i-prebound" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.512559  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.630685ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42916]
I0812 13:40:34.512742  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56894
I0812 13:40:34.512785  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: 236a8b93-4a32-43fc-aef5-cd25e5466e03)", boundByController: false
I0812 13:40:34.512800  112807 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound
I0812 13:40:34.512819  112807 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:34.512835  112807 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0812 13:40:34.515064  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-i-pv-prebound: (1.768625ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.515316  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" with version 56896
I0812 13:40:34.515347  112807 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I0812 13:40:34.515360  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound] status: set phase Bound
I0812 13:40:34.517954  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-i-pv-prebound/status: (2.096328ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.518204  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" with version 56897
I0812 13:40:34.518229  112807 pv_controller.go:742] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" entered phase "Bound"
I0812 13:40:34.518245  112807 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.518264  112807 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: 236a8b93-4a32-43fc-aef5-cd25e5466e03)", boundByController: false
I0812 13:40:34.518285  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0812 13:40:34.518320  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56827
I0812 13:40:34.518334  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:34.518362  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:34.518378  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Pending
I0812 13:40:34.518392  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: phase Pending already set
I0812 13:40:34.518402  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" with version 56897
I0812 13:40:34.518411  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0812 13:40:34.518424  112807 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: 236a8b93-4a32-43fc-aef5-cd25e5466e03)", boundByController: false
I0812 13:40:34.518432  112807 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: claim is already correctly bound
I0812 13:40:34.518440  112807 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.518449  112807 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.518464  112807 pv_controller.go:841] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.518471  112807 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0812 13:40:34.518477  112807 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I0812 13:40:34.518485  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0812 13:40:34.518500  112807 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I0812 13:40:34.518505  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound] status: set phase Bound
I0812 13:40:34.518519  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound] status: phase Bound already set
I0812 13:40:34.518534  112807 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound"
I0812 13:40:34.518546  112807 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: 236a8b93-4a32-43fc-aef5-cd25e5466e03)", boundByController: false
I0812 13:40:34.518555  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0812 13:40:34.518569  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-canprovision", UID:"6387a028-4a7d-4c76-a386-39b82a81f6de", APIVersion:"v1", ResourceVersion:"56827", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0812 13:40:34.521599  112807 httplog.go:90] PATCH /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events/pvc-canprovision.15ba3091577b4deb: (2.620241ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.568562  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.224395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.668792  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.374878ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.769505  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.879784ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.868781  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.239828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:34.968997  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.621029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.068775  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.340295ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.168305  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.924932ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.267939  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.601603ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.369129  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.562461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.468109  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (1.773253ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.568898  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.462324ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.669389  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.983931ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.768467  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.070547ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.868540  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.18497ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:35.968767  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.374826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:36.068761  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.390023ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:36.168630  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.202012ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
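Note: the run of GET requests on pod-i-pv-prebound-w-provisioned above (and continuing below) is the test harness polling until the scheduler assigns the pod to a node. A minimal sketch of that kind of wait loop follows, assuming a recent client-go; the helper name is illustrative, not the test's actual function.

package example

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls a pod until spec.nodeName is set, i.e. the
// scheduler has bound it to a node, mirroring the repeated GETs in the log.
func waitForPodScheduled(cs kubernetes.Interface, ns, podName string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}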
I0812 13:40:36.195229  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned
I0812 13:40:36.195272  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned
I0812 13:40:36.195488  112807 scheduler_binder.go:651] All bound volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned" match with Node "node-1"
I0812 13:40:36.195518  112807 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned", PVC "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" on node "node-1"
I0812 13:40:36.195531  112807 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0812 13:40:36.195597  112807 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned", node "node-1"
I0812 13:40:36.195620  112807 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision", version 56827
I0812 13:40:36.195658  112807 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned", node "node-1"
I0812 13:40:36.199274  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (3.111582ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:36.199762  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56899
I0812 13:40:36.199810  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:36.199842  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:36.199850  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:36.199912  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[6387a028-4a7d-4c76-a386-39b82a81f6de]]
I0812 13:40:36.199968  112807 pv_controller.go:1372] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] started, class: "wait-lz6g"
I0812 13:40:36.202810  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (2.522647ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:36.203064  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56900
I0812 13:40:36.203096  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:36.203117  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:36.203124  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:36.203135  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[6387a028-4a7d-4c76-a386-39b82a81f6de]]
I0812 13:40:36.203141  112807 pv_controller.go:1642] operation "provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[6387a028-4a7d-4c76-a386-39b82a81f6de]" is already running, skipping
I0812 13:40:36.203228  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56900
I0812 13:40:36.204769  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-6387a028-4a7d-4c76-a386-39b82a81f6de: (1.251749ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:36.205774  112807 pv_controller.go:1476] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" created
I0812 13:40:36.205816  112807 pv_controller.go:1493] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: trying to save volume pvc-6387a028-4a7d-4c76-a386-39b82a81f6de
I0812 13:40:36.214165  112807 httplog.go:90] POST /api/v1/persistentvolumes: (8.04026ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:36.214437  112807 pv_controller.go:1501] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" saved
I0812 13:40:36.214467  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de", version 56901
I0812 13:40:36.214487  112807 pv_controller.go:1554] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" provisioned for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.214520  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-canprovision", UID:"6387a028-4a7d-4c76-a386-39b82a81f6de", APIVersion:"v1", ResourceVersion:"56900", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-6387a028-4a7d-4c76-a386-39b82a81f6de using kubernetes.io/mock-provisioner
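Note: the ProvisioningSucceeded event marks the end of dynamic provisioning for pvc-canprovision: no existing PV matched on node-1, so the controller created pvc-6387a028-4a7d-4c76-a386-39b82a81f6de through the class's provisioner. A claim that takes this path looks roughly like the sketch below; size, access mode and the class-name parameter are illustrative, not copied from the test fixtures.

package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// provisionableClaim returns a PVC that has no pre-existing matching PV and
// names a WaitForFirstConsumer class, so the PV controller dynamically
// provisions a volume once a pod using it is scheduled. Values are illustrative.
func provisionableClaim(ns, className string) *v1.PersistentVolumeClaim {
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: "pvc-canprovision"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &className,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{
					v1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
}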
I0812 13:40:36.216126  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" with version 56901
I0812 13:40:36.216176  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: 6387a028-4a7d-4c76-a386-39b82a81f6de)", boundByController: true
I0812 13:40:36.216188  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:36.216204  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:36.216218  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:36.216255  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56900
I0812 13:40:36.216274  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:36.216305  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" found: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: 6387a028-4a7d-4c76-a386-39b82a81f6de)", boundByController: true
I0812 13:40:36.216317  112807 pv_controller.go:931] binding volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.216327  112807 pv_controller.go:829] updating PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.216338  112807 pv_controller.go:841] updating PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.216346  112807 pv_controller.go:777] updating PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: set phase Bound
I0812 13:40:36.218476  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (3.732402ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:36.222408  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-6387a028-4a7d-4c76-a386-39b82a81f6de/status: (4.615527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.222772  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" with version 56903
I0812 13:40:36.222843  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: 6387a028-4a7d-4c76-a386-39b82a81f6de)", boundByController: true
I0812 13:40:36.222880  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:36.222906  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:36.222921  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:36.223597  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" with version 56903
I0812 13:40:36.223636  112807 pv_controller.go:798] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" entered phase "Bound"
I0812 13:40:36.223656  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: binding to "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de"
I0812 13:40:36.223680  112807 pv_controller.go:901] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.228398  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (4.380091ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.228768  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56904
I0812 13:40:36.228823  112807 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: bound to "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de"
I0812 13:40:36.228837  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Bound
I0812 13:40:36.231972  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision/status: (2.88495ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.232469  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56905
I0812 13:40:36.232506  112807 pv_controller.go:742] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" entered phase "Bound"
I0812 13:40:36.232523  112807 pv_controller.go:957] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.232543  112807 pv_controller.go:958] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: 6387a028-4a7d-4c76-a386-39b82a81f6de)", boundByController: true
I0812 13:40:36.232600  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de", bindCompleted: true, boundByController: true
I0812 13:40:36.232658  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56905
I0812 13:40:36.232677  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Bound, bound to: "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de", bindCompleted: true, boundByController: true
I0812 13:40:36.232758  112807 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" found: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: 6387a028-4a7d-4c76-a386-39b82a81f6de)", boundByController: true
I0812 13:40:36.232768  112807 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: claim is already correctly bound
I0812 13:40:36.232776  112807 pv_controller.go:931] binding volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.232785  112807 pv_controller.go:829] updating PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.232800  112807 pv_controller.go:841] updating PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.232808  112807 pv_controller.go:777] updating PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: set phase Bound
I0812 13:40:36.232815  112807 pv_controller.go:780] updating PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: phase Bound already set
I0812 13:40:36.232822  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: binding to "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de"
I0812 13:40:36.232841  112807 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: already bound to "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de"
I0812 13:40:36.232853  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Bound
I0812 13:40:36.232868  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: phase Bound already set
I0812 13:40:36.232876  112807 pv_controller.go:957] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:36.232888  112807 pv_controller.go:958] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: 6387a028-4a7d-4c76-a386-39b82a81f6de)", boundByController: true
I0812 13:40:36.232900  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de", bindCompleted: true, boundByController: true
I0812 13:40:36.269098  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.554493ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.368947  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.458968ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.468909  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.396985ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.568591  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.197726ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.668857  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.389074ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.768949  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.366267ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.868577  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.256949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:36.969952  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (3.512034ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.068669  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.208664ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.168874  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.361753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.195059  112807 cache.go:676] Couldn't expire cache for pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned. Binding is still in progress.
I0812 13:40:37.200101  112807 scheduler_binder.go:545] All PVCs for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned" are bound
I0812 13:40:37.200226  112807 factory.go:622] Attempting to bind pod-i-pv-prebound-w-provisioned to node-1
I0812 13:40:37.203662  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned/binding: (2.932236ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.204216  112807 scheduler.go:614] pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0812 13:40:37.207048  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.239661ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.268495  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-pv-prebound-w-provisioned: (2.109661ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.270654  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-i-pv-prebound: (1.587001ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.272624  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.208578ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.274350  112807 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.129928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
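Note: before cleanup the test re-reads the bound pod, both of its claims (pvc-i-pv-prebound and pvc-canprovision) and the pre-bound PV to verify the bindings. The pod under test references both claims as volumes, roughly as sketched below; the container image, mount paths and volume names are illustrative, not the test's fixture values.

package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithTwoClaims builds a pod that mounts one pre-bound claim and one
// dynamically provisioned claim, as pod-i-pv-prebound-w-provisioned does here.
func podWithTwoClaims(ns string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: "pod-i-pv-prebound-w-provisioned"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "busybox",
				Image: "busybox",
				VolumeMounts: []v1.VolumeMount{
					{Name: "vol-prebound", MountPath: "/mnt/prebound"},
					{Name: "vol-provisioned", MountPath: "/mnt/provisioned"},
				},
			}},
			Volumes: []v1.Volume{
				{Name: "vol-prebound", VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-i-pv-prebound"},
				}},
				{Name: "vol-provisioned", VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-canprovision"},
				}},
			},
		},
	}
}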
I0812 13:40:37.280209  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (5.346238ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.286008  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" deleted
I0812 13:40:37.286056  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" with version 56903
I0812 13:40:37.286129  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: 6387a028-4a7d-4c76-a386-39b82a81f6de)", boundByController: true
I0812 13:40:37.286141  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:37.287415  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.022303ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.289158  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (8.451902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.289620  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision not found
I0812 13:40:37.289656  112807 pv_controller.go:575] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" is released and reclaim policy "Delete" will be executed
I0812 13:40:37.289672  112807 pv_controller.go:777] updating PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: set phase Released
I0812 13:40:37.291060  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" deleted
I0812 13:40:37.292820  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-6387a028-4a7d-4c76-a386-39b82a81f6de/status: (2.835072ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.293443  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" with version 56912
I0812 13:40:37.293497  112807 pv_controller.go:798] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" entered phase "Released"
I0812 13:40:37.293510  112807 pv_controller.go:1022] reclaimVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: policy is Delete
I0812 13:40:37.293532  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-6387a028-4a7d-4c76-a386-39b82a81f6de[2e29791e-5689-412d-b99a-fcf4b4016d4d]]
I0812 13:40:37.293567  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 56894
I0812 13:40:37.293588  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound (uid: 236a8b93-4a32-43fc-aef5-cd25e5466e03)", boundByController: false
I0812 13:40:37.293596  112807 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound
I0812 13:40:37.293614  112807 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound not found
I0812 13:40:37.293624  112807 pv_controller.go:575] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I0812 13:40:37.293631  112807 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Released
I0812 13:40:37.293738  112807 pv_controller.go:1146] deleteVolumeOperation [pvc-6387a028-4a7d-4c76-a386-39b82a81f6de] started
I0812 13:40:37.295491  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-6387a028-4a7d-4c76-a386-39b82a81f6de: (1.035075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.295515  112807 store.go:349] GuaranteedUpdate of /11762e59-1cdd-457b-89ef-c5f604a5ded0/persistentvolumes/pv-i-prebound failed because of a conflict, going to retry
I0812 13:40:37.295676  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.713225ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.295745  112807 pv_controller.go:1250] isVolumeReleased[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: volume is released
I0812 13:40:37.295759  112807 pv_controller.go:1285] doDeleteVolume [pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]
I0812 13:40:37.295790  112807 pv_controller.go:1316] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" deleted
I0812 13:40:37.295800  112807 pv_controller.go:1193] deleteVolumeOperation [pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: success
I0812 13:40:37.297004  112807 pv_controller.go:790] updating PersistentVolume[pv-i-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": StorageError: invalid object, Code: 4, Key: /11762e59-1cdd-457b-89ef-c5f604a5ded0/persistentvolumes/pv-i-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 87370c2a-896d-46a4-b493-f43c68dde515, UID in object meta: 
I0812 13:40:37.297042  112807 pv_controller_base.go:202] could not sync volume "pv-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": StorageError: invalid object, Code: 4, Key: /11762e59-1cdd-457b-89ef-c5f604a5ded0/persistentvolumes/pv-i-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 87370c2a-896d-46a4-b493-f43c68dde515, UID in object meta: 
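Note: the 409 and the "could not sync volume" error above are an optimistic-concurrency race: the test's cleanup deleted pv-i-prebound while the controller was still setting its phase, so the retried update found the UID precondition no longer matched, and the controller drops this sync and later observes the deletion. As a general illustration of handling transient conflicts with client-go (not a claim about how pv_controller itself recovers), retry.RetryOnConflict re-reads the object and retries the update:

package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setPVPhase updates a PV's status phase, re-reading and retrying if the
// update hits a conflict. Non-conflict errors (e.g. NotFound after a
// concurrent delete, as seen in this log) are returned immediately.
func setPVPhase(cs kubernetes.Interface, name string, phase v1.PersistentVolumePhase) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		pv.Status.Phase = phase
		_, err = cs.CoreV1().PersistentVolumes().UpdateStatus(context.TODO(), pv, metav1.UpdateOptions{})
		return err
	})
}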
I0812 13:40:37.297075  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" with version 56912
I0812 13:40:37.297094  112807 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-6387a028-4a7d-4c76-a386-39b82a81f6de: (1.147668ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.297101  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: phase: Released, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: 6387a028-4a7d-4c76-a386-39b82a81f6de)", boundByController: true
I0812 13:40:37.297112  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:37.297127  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision not found
I0812 13:40:37.297134  112807 pv_controller.go:1022] reclaimVolume[pvc-6387a028-4a7d-4c76-a386-39b82a81f6de]: policy is Delete
I0812 13:40:37.297150  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-6387a028-4a7d-4c76-a386-39b82a81f6de[2e29791e-5689-412d-b99a-fcf4b4016d4d]]
I0812 13:40:37.297157  112807 pv_controller.go:1642] operation "delete-pvc-6387a028-4a7d-4c76-a386-39b82a81f6de[2e29791e-5689-412d-b99a-fcf4b4016d4d]" is already running, skipping
I0812 13:40:37.297171  112807 pv_controller_base.go:212] volume "pv-i-prebound" deleted
I0812 13:40:37.297194  112807 pv_controller_base.go:396] deletion of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-i-pv-prebound" was already processed
I0812 13:40:37.297343  112807 pv_controller.go:1200] failed to delete volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" from database: persistentvolumes "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" not found
I0812 13:40:37.297528  112807 httplog.go:90] DELETE /api/v1/persistentvolumes: (7.633949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42986]
I0812 13:40:37.297948  112807 pv_controller_base.go:212] volume "pvc-6387a028-4a7d-4c76-a386-39b82a81f6de" deleted
I0812 13:40:37.297997  112807 pv_controller_base.go:396] deletion of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" was already processed
I0812 13:40:37.307072  112807 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.628393ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.307295  112807 volume_binding_test.go:751] Running test wait one pv prebound, one provisioned
I0812 13:40:37.308965  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.443697ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.310841  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.467979ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.313218  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.960148ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.316202  112807 httplog.go:90] POST /api/v1/persistentvolumes: (2.018831ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.316674  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-prebound", version 56921
I0812 13:40:37.316789  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: )", boundByController: false
I0812 13:40:37.316799  112807 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound
I0812 13:40:37.316807  112807 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0812 13:40:37.318891  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (1.888467ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.319260  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.254373ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.319757  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56922
I0812 13:40:37.319781  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound", version 56923
I0812 13:40:37.319791  112807 pv_controller.go:798] volume "pv-w-prebound" entered phase "Available"
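Note: the "wait one pv prebound, one provisioned" case creates pv-w-prebound with a claimRef that already names pvc-w-pv-prebound but carries no UID yet (the empty "uid: " in the sync lines above), which is why the controller reports the volume as pre-bound and parks it as Available until syncClaim completes the bind. A sketch of such a pre-bound PV object; the hostPath source, capacity and class-name parameter are illustrative, not the test's fixture values.

package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preboundPV returns a PV whose ClaimRef already names the target claim
// (namespace/name only, UID left empty), so the PV controller reserves it
// for that claim and binds the pair once the claim is synced.
func preboundPV(ns, className string) *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "pv-w-prebound"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("5Gi"),
			},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/pv-w-prebound"},
			},
			ClaimRef: &v1.ObjectReference{
				Namespace: ns,
				Name:      "pvc-w-pv-prebound",
			},
			StorageClassName: className,
		},
	}
}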
I0812 13:40:37.319804  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.319839  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: )", boundByController: false
I0812 13:40:37.319848  112807 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.319859  112807 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.319861  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56922
I0812 13:40:37.319876  112807 pv_controller.go:849] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0812 13:40:37.319897  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: )", boundByController: false
I0812 13:40:37.319907  112807 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound
I0812 13:40:37.319913  112807 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0812 13:40:37.319919  112807 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Available already set
I0812 13:40:37.321825  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (2.007928ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.322938  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.691858ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.323284  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56925
I0812 13:40:37.323318  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: e789454e-f846-4118-91f1-ff0eca550701)", boundByController: false
I0812 13:40:37.323324  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56925
I0812 13:40:37.323364  112807 pv_controller.go:862] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.323375  112807 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0812 13:40:37.323327  112807 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound
I0812 13:40:37.323542  112807 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.323602  112807 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0812 13:40:37.324561  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (2.224936ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.324981  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.34401ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.325190  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56927
I0812 13:40:37.325212  112807 pv_controller.go:798] volume "pv-w-prebound" entered phase "Bound"
I0812 13:40:37.325226  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0812 13:40:37.325235  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56927
I0812 13:40:37.325243  112807 pv_controller.go:901] volume "pv-w-prebound" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.325266  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: e789454e-f846-4118-91f1-ff0eca550701)", boundByController: false
I0812 13:40:37.325286  112807 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound
I0812 13:40:37.325300  112807 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.325331  112807 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0812 13:40:37.325599  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned
I0812 13:40:37.325622  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned
I0812 13:40:37.325878  112807 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned", PVC "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" on node "node-1"
I0812 13:40:37.325908  112807 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0812 13:40:37.326023  112807 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned", node "node-1"
I0812 13:40:37.326056  112807 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision", version 56924
I0812 13:40:37.326092  112807 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned", node "node-1"
I0812 13:40:37.326109  112807 scheduler_binder.go:399] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0812 13:40:37.327085  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-pv-prebound: (1.576996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.327320  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" with version 56928
I0812 13:40:37.327348  112807 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I0812 13:40:37.327357  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound] status: set phase Bound
I0812 13:40:37.327523  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.206173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.327804  112807 scheduler_binder.go:405] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.329409  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.394561ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.329487  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-pv-prebound/status: (1.93817ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.329763  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" with version 56929
I0812 13:40:37.329866  112807 pv_controller.go:742] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" entered phase "Bound"
I0812 13:40:37.329918  112807 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.330090  112807 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: e789454e-f846-4118-91f1-ff0eca550701)", boundByController: false
I0812 13:40:37.330200  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0812 13:40:37.330478  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision", version 56930
I0812 13:40:37.330586  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.330734  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:37.330848  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:37.330948  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[b668504c-184b-467f-a282-8831b001435f]]
I0812 13:40:37.331147  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" with version 56929
I0812 13:40:37.331176  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0812 13:40:37.331197  112807 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: e789454e-f846-4118-91f1-ff0eca550701)", boundByController: false
I0812 13:40:37.331215  112807 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: claim is already correctly bound
I0812 13:40:37.331228  112807 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.331238  112807 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.331261  112807 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.331273  112807 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0812 13:40:37.331282  112807 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I0812 13:40:37.331292  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0812 13:40:37.331315  112807 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I0812 13:40:37.331386  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound] status: set phase Bound
I0812 13:40:37.331437  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound] status: phase Bound already set
I0812 13:40:37.331493  112807 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound"
I0812 13:40:37.331565  112807 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: e789454e-f846-4118-91f1-ff0eca550701)", boundByController: false
I0812 13:40:37.331610  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0812 13:40:37.331663  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56930
I0812 13:40:37.331722  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.331767  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:37.331794  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:37.331825  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[b668504c-184b-467f-a282-8831b001435f]]
I0812 13:40:37.331853  112807 pv_controller.go:1642] operation "provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[b668504c-184b-467f-a282-8831b001435f]" is already running, skipping
I0812 13:40:37.331114  112807 pv_controller.go:1372] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] started, class: "wait-vwgt"
I0812 13:40:37.334143  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.88913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.334416  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56931
I0812 13:40:37.334450  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.334473  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:37.334481  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:37.334495  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[b668504c-184b-467f-a282-8831b001435f]]
I0812 13:40:37.334505  112807 pv_controller.go:1642] operation "provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[b668504c-184b-467f-a282-8831b001435f]" is already running, skipping
I0812 13:40:37.334575  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56931
I0812 13:40:37.336024  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-b668504c-184b-467f-a282-8831b001435f: (1.11148ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.336492  112807 pv_controller.go:1476] volume "pvc-b668504c-184b-467f-a282-8831b001435f" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" created
I0812 13:40:37.336647  112807 pv_controller.go:1493] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: trying to save volume pvc-b668504c-184b-467f-a282-8831b001435f
I0812 13:40:37.338801  112807 httplog.go:90] POST /api/v1/persistentvolumes: (1.708894ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.339060  112807 pv_controller.go:1501] volume "pvc-b668504c-184b-467f-a282-8831b001435f" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" saved
I0812 13:40:37.339135  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-b668504c-184b-467f-a282-8831b001435f", version 56932
I0812 13:40:37.339162  112807 pv_controller.go:1554] volume "pvc-b668504c-184b-467f-a282-8831b001435f" provisioned for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.339283  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b668504c-184b-467f-a282-8831b001435f" with version 56932
I0812 13:40:37.339405  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: b668504c-184b-467f-a282-8831b001435f)", boundByController: true
I0812 13:40:37.339580  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:37.339658  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.339245  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-canprovision", UID:"b668504c-184b-467f-a282-8831b001435f", APIVersion:"v1", ResourceVersion:"56931", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b668504c-184b-467f-a282-8831b001435f using kubernetes.io/mock-provisioner
I0812 13:40:37.339777  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:37.340110  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56931
I0812 13:40:37.340295  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.340419  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: volume "pvc-b668504c-184b-467f-a282-8831b001435f" found: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: b668504c-184b-467f-a282-8831b001435f)", boundByController: true
I0812 13:40:37.340497  112807 pv_controller.go:931] binding volume "pvc-b668504c-184b-467f-a282-8831b001435f" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.340543  112807 pv_controller.go:829] updating PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.340633  112807 pv_controller.go:841] updating PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.340776  112807 pv_controller.go:777] updating PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: set phase Bound
I0812 13:40:37.342358  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.893373ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:37.342969  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-b668504c-184b-467f-a282-8831b001435f/status: (1.738515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.343304  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b668504c-184b-467f-a282-8831b001435f" with version 56934
I0812 13:40:37.343357  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: b668504c-184b-467f-a282-8831b001435f)", boundByController: true
I0812 13:40:37.343370  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:37.343388  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:37.343404  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:37.343308  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b668504c-184b-467f-a282-8831b001435f" with version 56934
I0812 13:40:37.343428  112807 pv_controller.go:798] volume "pvc-b668504c-184b-467f-a282-8831b001435f" entered phase "Bound"
I0812 13:40:37.343441  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: binding to "pvc-b668504c-184b-467f-a282-8831b001435f"
I0812 13:40:37.343460  112807 pv_controller.go:901] volume "pvc-b668504c-184b-467f-a282-8831b001435f" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.345849  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (2.024597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.346087  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56935
I0812 13:40:37.346118  112807 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: bound to "pvc-b668504c-184b-467f-a282-8831b001435f"
I0812 13:40:37.346146  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Bound
I0812 13:40:37.348618  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision/status: (2.210435ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.348952  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56936
I0812 13:40:37.348988  112807 pv_controller.go:742] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" entered phase "Bound"
I0812 13:40:37.349001  112807 pv_controller.go:957] volume "pvc-b668504c-184b-467f-a282-8831b001435f" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.349019  112807 pv_controller.go:958] volume "pvc-b668504c-184b-467f-a282-8831b001435f" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: b668504c-184b-467f-a282-8831b001435f)", boundByController: true
I0812 13:40:37.349032  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-b668504c-184b-467f-a282-8831b001435f", bindCompleted: true, boundByController: true
I0812 13:40:37.349072  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 56936
I0812 13:40:37.349183  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Bound, bound to: "pvc-b668504c-184b-467f-a282-8831b001435f", bindCompleted: true, boundByController: true
I0812 13:40:37.349211  112807 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: volume "pvc-b668504c-184b-467f-a282-8831b001435f" found: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: b668504c-184b-467f-a282-8831b001435f)", boundByController: true
I0812 13:40:37.349220  112807 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: claim is already correctly bound
I0812 13:40:37.349227  112807 pv_controller.go:931] binding volume "pvc-b668504c-184b-467f-a282-8831b001435f" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.349235  112807 pv_controller.go:829] updating PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.349251  112807 pv_controller.go:841] updating PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.349259  112807 pv_controller.go:777] updating PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: set phase Bound
I0812 13:40:37.349265  112807 pv_controller.go:780] updating PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: phase Bound already set
I0812 13:40:37.349271  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: binding to "pvc-b668504c-184b-467f-a282-8831b001435f"
I0812 13:40:37.349390  112807 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: already bound to "pvc-b668504c-184b-467f-a282-8831b001435f"
I0812 13:40:37.349407  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Bound
I0812 13:40:37.349464  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: phase Bound already set
I0812 13:40:37.349482  112807 pv_controller.go:957] volume "pvc-b668504c-184b-467f-a282-8831b001435f" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:37.349534  112807 pv_controller.go:958] volume "pvc-b668504c-184b-467f-a282-8831b001435f" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: b668504c-184b-467f-a282-8831b001435f)", boundByController: true
I0812 13:40:37.349557  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-b668504c-184b-467f-a282-8831b001435f", bindCompleted: true, boundByController: true
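The controller lines above trace one complete dynamic-provisioning cycle: a PV is created for pvc-canprovision, syncVolume and syncClaim converge, and both objects finish in phase Bound. As a rough client-side sketch of the same flow (not code from this test), the Go fragment below creates a claim against a provisioning StorageClass and polls until the claim reports Bound. The kubeconfig wiring, the "demo" namespace, the "standard" class name, and the size are illustrative, and the Create/Get signatures assume the context-taking client-go variants (roughly v0.18 through v0.28, where the claim's Resources field is still corev1.ResourceRequirements).

// Sketch only: create a PVC against a dynamic-provisioning class and wait
// until the PV controller reports it Bound. All names are illustrative.
package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    class := "standard" // hypothetical provisioning class
    pvc := &corev1.PersistentVolumeClaim{
        ObjectMeta: metav1.ObjectMeta{Name: "pvc-demo", Namespace: "demo"},
        Spec: corev1.PersistentVolumeClaimSpec{
            StorageClassName: &class,
            AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
            Resources: corev1.ResourceRequirements{
                Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
            },
        },
    }
    if _, err := cs.CoreV1().PersistentVolumeClaims("demo").Create(context.TODO(), pvc, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Poll until the claim enters phase Bound, mirroring the
    // `claim ... entered phase "Bound"` transition in the log.
    err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
        got, err := cs.CoreV1().PersistentVolumeClaims("demo").Get(context.TODO(), "pvc-demo", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return got.Status.Phase == corev1.ClaimBound, nil
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("claim is bound")
}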
I0812 13:40:37.428534  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.362767ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.527807  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (1.830465ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.628412  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.310463ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.728336  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.44635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.828024  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (1.809532ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:37.928361  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.500322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:38.027888  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.020584ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:38.128566  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.563932ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:38.195347  112807 cache.go:676] Couldn't expire cache for pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned. Binding is still in progress.
I0812 13:40:38.221078  112807 httplog.go:90] GET /api/v1/namespaces/default: (1.85695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:38.223509  112807 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.645372ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:38.225676  112807 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.516197ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:38.228099  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.133662ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.328907  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.566532ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.329912  112807 scheduler_binder.go:545] All PVCs for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned" are bound
I0812 13:40:38.329992  112807 factory.go:622] Attempting to bind pod-w-pv-prebound-w-provisioned to node-1
I0812 13:40:38.333198  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned/binding: (2.857328ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.333575  112807 scheduler.go:614] pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-w-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0812 13:40:38.336232  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.060642ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.428295  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-w-pv-prebound-w-provisioned: (2.346852ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.431156  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-w-pv-prebound: (1.817017ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.433242  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.502243ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.435138  112807 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (1.264051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.442771  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (6.897715ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.448818  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" deleted
I0812 13:40:38.449056  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b668504c-184b-467f-a282-8831b001435f" with version 56934
I0812 13:40:38.449187  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: b668504c-184b-467f-a282-8831b001435f)", boundByController: true
I0812 13:40:38.449235  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:38.451109  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (7.679475ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.451131  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.299266ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:38.451311  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" deleted
I0812 13:40:38.451518  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision not found
I0812 13:40:38.451540  112807 pv_controller.go:575] volume "pvc-b668504c-184b-467f-a282-8831b001435f" is released and reclaim policy "Delete" will be executed
I0812 13:40:38.451551  112807 pv_controller.go:777] updating PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: set phase Released
I0812 13:40:38.454479  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-b668504c-184b-467f-a282-8831b001435f/status: (2.562699ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.454700  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b668504c-184b-467f-a282-8831b001435f" with version 56959
I0812 13:40:38.454808  112807 pv_controller.go:798] volume "pvc-b668504c-184b-467f-a282-8831b001435f" entered phase "Released"
I0812 13:40:38.454842  112807 pv_controller.go:1022] reclaimVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: policy is Delete
I0812 13:40:38.454886  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-b668504c-184b-467f-a282-8831b001435f[521c9f2e-0c00-4068-b447-1d7952982b5c]]
I0812 13:40:38.454932  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 56927
I0812 13:40:38.454986  112807 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound (uid: e789454e-f846-4118-91f1-ff0eca550701)", boundByController: false
I0812 13:40:38.455020  112807 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound
I0812 13:40:38.455054  112807 pv_controller.go:547] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound not found
I0812 13:40:38.455104  112807 pv_controller.go:575] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I0812 13:40:38.455141  112807 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Released
I0812 13:40:38.455356  112807 pv_controller.go:1146] deleteVolumeOperation [pvc-b668504c-184b-467f-a282-8831b001435f] started
I0812 13:40:38.457153  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.466219ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.457153  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-b668504c-184b-467f-a282-8831b001435f: (1.218875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43112]
I0812 13:40:38.457378  112807 pv_controller.go:790] updating PersistentVolume[pv-w-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": StorageError: invalid object, Code: 4, Key: /11762e59-1cdd-457b-89ef-c5f604a5ded0/persistentvolumes/pv-w-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7fa3367c-4f75-46cb-8a38-70b3c54c5ae5, UID in object meta: 
I0812 13:40:38.457393  112807 pv_controller_base.go:202] could not sync volume "pv-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": StorageError: invalid object, Code: 4, Key: /11762e59-1cdd-457b-89ef-c5f604a5ded0/persistentvolumes/pv-w-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7fa3367c-4f75-46cb-8a38-70b3c54c5ae5, UID in object meta: 
I0812 13:40:38.457420  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-b668504c-184b-467f-a282-8831b001435f" with version 56959
I0812 13:40:38.457426  112807 pv_controller.go:1250] isVolumeReleased[pvc-b668504c-184b-467f-a282-8831b001435f]: volume is released
I0812 13:40:38.457439  112807 pv_controller.go:1285] doDeleteVolume [pvc-b668504c-184b-467f-a282-8831b001435f]
I0812 13:40:38.457470  112807 pv_controller.go:1316] volume "pvc-b668504c-184b-467f-a282-8831b001435f" deleted
I0812 13:40:38.457481  112807 pv_controller.go:1193] deleteVolumeOperation [pvc-b668504c-184b-467f-a282-8831b001435f]: success
I0812 13:40:38.457576  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: phase: Released, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: b668504c-184b-467f-a282-8831b001435f)", boundByController: true
I0812 13:40:38.457622  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:38.457733  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision not found
I0812 13:40:38.457752  112807 pv_controller.go:1022] reclaimVolume[pvc-b668504c-184b-467f-a282-8831b001435f]: policy is Delete
I0812 13:40:38.457768  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-b668504c-184b-467f-a282-8831b001435f[521c9f2e-0c00-4068-b447-1d7952982b5c]]
I0812 13:40:38.457775  112807 pv_controller.go:1642] operation "delete-pvc-b668504c-184b-467f-a282-8831b001435f[521c9f2e-0c00-4068-b447-1d7952982b5c]" is already running, skipping
I0812 13:40:38.457871  112807 pv_controller_base.go:212] volume "pv-w-prebound" deleted
I0812 13:40:38.457951  112807 pv_controller_base.go:396] deletion of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-pv-prebound" was already processed
I0812 13:40:38.460013  112807 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-b668504c-184b-467f-a282-8831b001435f: (2.213214ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.460028  112807 pv_controller_base.go:212] volume "pvc-b668504c-184b-467f-a282-8831b001435f" deleted
I0812 13:40:38.460067  112807 pv_controller_base.go:396] deletion of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" was already processed
I0812 13:40:38.460288  112807 pv_controller.go:1200] failed to delete volume "pvc-b668504c-184b-467f-a282-8831b001435f" from database: persistentvolumes "pvc-b668504c-184b-467f-a282-8831b001435f" not found
I0812 13:40:38.460342  112807 httplog.go:90] DELETE /api/v1/persistentvolumes: (8.429665ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42990]
I0812 13:40:38.469818  112807 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.888554ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.470158  112807 volume_binding_test.go:751] Running test immediate provisioned by controller
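This case exercises Immediate volume binding: the PV controller provisions and binds pvc-controller-provisioned on its own, and the pod only schedules once the claim is Bound. A minimal sketch of such a class, assembled from the k8s.io/api/storage/v1 types, follows; the class name and provisioner are hypothetical, while the test's generated class ("immediate-gx6d") uses the mock provisioner kubernetes.io/mock-provisioner visible in the surrounding events.

// Sketch only: an Immediate-binding StorageClass built from the
// k8s.io/api/storage/v1 types. Name and provisioner are hypothetical.
package sketch

import (
    corev1 "k8s.io/api/core/v1"
    storagev1 "k8s.io/api/storage/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func immediateClass() *storagev1.StorageClass {
    mode := storagev1.VolumeBindingImmediate
    reclaim := corev1.PersistentVolumeReclaimDelete // provisioned PVs are deleted when the claim goes away
    return &storagev1.StorageClass{
        ObjectMeta:        metav1.ObjectMeta{Name: "immediate-demo"},
        Provisioner:       "example.com/demo-provisioner", // hypothetical external provisioner
        VolumeBindingMode: &mode,
        ReclaimPolicy:     &reclaim,
    }
}

With Immediate mode the scheduler has no say in volume placement, which is why the pod is simply retried below until the claim binds.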
I0812 13:40:38.472423  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.047071ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.474318  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.259731ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.477205  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.170754ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.479652  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (1.839587ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.480018  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned", version 56968
I0812 13:40:38.480064  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:38.480088  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: no volume found
I0812 13:40:38.480097  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: started
I0812 13:40:38.480115  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned[000baa9f-c679-4842-85f7-4d513e6a36fc]]
I0812 13:40:38.480161  112807 pv_controller.go:1372] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned] started, class: "immediate-gx6d"
I0812 13:40:38.482148  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (1.976713ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.482404  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" with version 56970
I0812 13:40:38.482469  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:38.482486  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: no volume found
I0812 13:40:38.482494  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: started
I0812 13:40:38.482541  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned[000baa9f-c679-4842-85f7-4d513e6a36fc]]
I0812 13:40:38.482548  112807 pv_controller.go:1642] operation "provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned[000baa9f-c679-4842-85f7-4d513e6a36fc]" is already running, skipping
I0812 13:40:38.482568  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-controller-provisioned: (2.206718ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43112]
I0812 13:40:38.482909  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" with version 56970
I0812 13:40:38.483295  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound
I0812 13:40:38.483315  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound
E0812 13:40:38.483485  112807 factory.go:573] Error scheduling volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound: pod has unbound immediate PersistentVolumeClaims; retrying
I0812 13:40:38.483525  112807 factory.go:631] Updating pod condition for volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound to (PodScheduled==False, Reason=Unschedulable)
I0812 13:40:38.484567  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-000baa9f-c679-4842-85f7-4d513e6a36fc: (1.39436ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43112]
I0812 13:40:38.484780  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.0169ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.484838  112807 pv_controller.go:1476] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" created
I0812 13:40:38.484864  112807 pv_controller.go:1493] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: trying to save volume pvc-000baa9f-c679-4842-85f7-4d513e6a36fc
I0812 13:40:38.485739  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound/status: (1.590322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:38.485963  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.780116ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43116]
E0812 13:40:38.485977  112807 scheduler.go:506] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0812 13:40:38.486769  112807 httplog.go:90] POST /api/v1/persistentvolumes: (1.713515ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43112]
I0812 13:40:38.487561  112807 pv_controller.go:1501] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" saved
I0812 13:40:38.487586  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc", version 56973
I0812 13:40:38.487607  112807 pv_controller.go:1554] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" provisioned for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.487683  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-controller-provisioned", UID:"000baa9f-c679-4842-85f7-4d513e6a36fc", APIVersion:"v1", ResourceVersion:"56970", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-000baa9f-c679-4842-85f7-4d513e6a36fc using kubernetes.io/mock-provisioner
I0812 13:40:38.487859  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" with version 56973
I0812 13:40:38.487962  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound
I0812 13:40:38.487978  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound
I0812 13:40:38.488062  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned (uid: 000baa9f-c679-4842-85f7-4d513e6a36fc)", boundByController: true
I0812 13:40:38.488116  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned
I0812 13:40:38.488189  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:38.488247  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:38.488326  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" with version 56970
I0812 13:40:38.488420  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:38.488506  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" found: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned (uid: 000baa9f-c679-4842-85f7-4d513e6a36fc)", boundByController: true
I0812 13:40:38.488579  112807 pv_controller.go:931] binding volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.488619  112807 pv_controller.go:829] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.488677  112807 pv_controller.go:841] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.488758  112807 pv_controller.go:777] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: set phase Bound
E0812 13:40:38.488987  112807 factory.go:573] Error scheduling volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound: pod has unbound immediate PersistentVolumeClaims; retrying
I0812 13:40:38.489026  112807 factory.go:631] Updating pod condition for volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound to (PodScheduled==False, Reason=Unschedulable)
E0812 13:40:38.489037  112807 scheduler.go:506] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0812 13:40:38.490500  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.660425ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:38.491000  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.485173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
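In the retry cluster just above, the scheduler rejects pod-i-unbound with "pod has unbound immediate PersistentVolumeClaims" and writes PodScheduled==False, Reason=Unschedulable to the pod's status; once the claim reaches Bound a moment later, the pod schedules normally. The small helper below is a hedged sketch of detecting that intermediate state from pod status using only the standard core/v1 condition fields; the function name is illustrative.

// Sketch only: read the PodScheduled condition that the scheduler sets while
// a pod cannot be placed. Function name is illustrative.
package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// isUnschedulable reports whether the pod currently carries
// PodScheduled==False with reason "Unschedulable".
func isUnschedulable(pod *corev1.Pod) bool {
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodScheduled &&
            cond.Status == corev1.ConditionFalse &&
            cond.Reason == corev1.PodReasonUnschedulable {
            return true
        }
    }
    return false
}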
I0812 13:40:38.491355  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" with version 56975
I0812 13:40:38.491393  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned (uid: 000baa9f-c679-4842-85f7-4d513e6a36fc)", boundByController: true
I0812 13:40:38.491408  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned
I0812 13:40:38.491425  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:38.491440  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:38.492155  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (2.644893ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43118]
I0812 13:40:38.492732  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-000baa9f-c679-4842-85f7-4d513e6a36fc/status: (3.706808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0812 13:40:38.493033  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" with version 56975
I0812 13:40:38.493057  112807 pv_controller.go:798] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" entered phase "Bound"
I0812 13:40:38.493068  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: binding to "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc"
I0812 13:40:38.493082  112807 pv_controller.go:901] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.495048  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-controller-provisioned: (1.749563ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:38.495325  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" with version 56977
I0812 13:40:38.495376  112807 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: bound to "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc"
I0812 13:40:38.495385  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned] status: set phase Bound
I0812 13:40:38.497416  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-controller-provisioned/status: (1.74386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:38.497818  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" with version 56978
I0812 13:40:38.497928  112807 pv_controller.go:742] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" entered phase "Bound"
I0812 13:40:38.498013  112807 pv_controller.go:957] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.498087  112807 pv_controller.go:958] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned (uid: 000baa9f-c679-4842-85f7-4d513e6a36fc)", boundByController: true
I0812 13:40:38.498126  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc", bindCompleted: true, boundByController: true
I0812 13:40:38.498220  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" with version 56978
I0812 13:40:38.498315  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: phase: Bound, bound to: "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc", bindCompleted: true, boundByController: true
I0812 13:40:38.498374  112807 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" found: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned (uid: 000baa9f-c679-4842-85f7-4d513e6a36fc)", boundByController: true
I0812 13:40:38.498413  112807 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: claim is already correctly bound
I0812 13:40:38.498454  112807 pv_controller.go:931] binding volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.498500  112807 pv_controller.go:829] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.498537  112807 pv_controller.go:841] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.498623  112807 pv_controller.go:777] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: set phase Bound
I0812 13:40:38.498651  112807 pv_controller.go:780] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: phase Bound already set
I0812 13:40:38.498677  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: binding to "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc"
I0812 13:40:38.498755  112807 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned]: already bound to "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc"
I0812 13:40:38.498802  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned] status: set phase Bound
I0812 13:40:38.498881  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned] status: phase Bound already set
I0812 13:40:38.498926  112807 pv_controller.go:957] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned"
I0812 13:40:38.498968  112807 pv_controller.go:958] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned (uid: 000baa9f-c679-4842-85f7-4d513e6a36fc)", boundByController: true
I0812 13:40:38.499033  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc", bindCompleted: true, boundByController: true
I0812 13:40:38.586026  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.932088ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:38.685500  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.136761ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:38.785363  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.301011ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:38.885000  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.89921ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:38.985006  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.010308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.085047  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.995451ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.185380  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.237786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.284859  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.84744ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.384982  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.913978ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.485141  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.072001ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.584998  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.006857ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.685581  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.597244ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.785085  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.959132ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.884627  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.720709ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:39.985213  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.153269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.084560  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (1.550127ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.185441  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.394379ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.196193  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound
I0812 13:40:40.196255  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound
I0812 13:40:40.196462  112807 scheduler_binder.go:651] All bound volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound" match with Node "node-1"
I0812 13:40:40.196559  112807 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound", node "node-1"
I0812 13:40:40.196570  112807 scheduler_binder.go:266] AssumePodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound", node "node-1": all PVCs bound and nothing to do
I0812 13:40:40.196620  112807 factory.go:622] Attempting to bind pod-i-unbound to node-1
I0812 13:40:40.199668  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound/binding: (2.570411ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.200015  112807 scheduler.go:614] pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-i-unbound is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0812 13:40:40.202181  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.74504ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.285580  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-i-unbound: (2.249434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.287775  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-controller-provisioned: (1.533677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.295184  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (6.808831ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.299404  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (3.882997ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.299547  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" deleted
I0812 13:40:40.299588  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" with version 56975
I0812 13:40:40.299758  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned (uid: 000baa9f-c679-4842-85f7-4d513e6a36fc)", boundByController: true
I0812 13:40:40.299769  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned
I0812 13:40:40.301028  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-controller-provisioned: (1.010643ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.301269  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned not found
I0812 13:40:40.301378  112807 pv_controller.go:575] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" is released and reclaim policy "Delete" will be executed
I0812 13:40:40.301430  112807 pv_controller.go:777] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: set phase Released
I0812 13:40:40.303515  112807 httplog.go:90] DELETE /api/v1/persistentvolumes: (3.494647ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.303730  112807 store.go:349] GuaranteedUpdate of /11762e59-1cdd-457b-89ef-c5f604a5ded0/persistentvolumes/pvc-000baa9f-c679-4842-85f7-4d513e6a36fc failed because of a conflict, going to retry
I0812 13:40:40.303932  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-000baa9f-c679-4842-85f7-4d513e6a36fc/status: (1.981011ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.304154  112807 pv_controller.go:790] updating PersistentVolume[pvc-000baa9f-c679-4842-85f7-4d513e6a36fc]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc": StorageError: invalid object, Code: 4, Key: /11762e59-1cdd-457b-89ef-c5f604a5ded0/persistentvolumes/pvc-000baa9f-c679-4842-85f7-4d513e6a36fc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a2755ffc-245a-44f5-88db-e8b857dcebae, UID in object meta: 
I0812 13:40:40.304184  112807 pv_controller_base.go:202] could not sync volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc": Operation cannot be fulfilled on persistentvolumes "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc": StorageError: invalid object, Code: 4, Key: /11762e59-1cdd-457b-89ef-c5f604a5ded0/persistentvolumes/pvc-000baa9f-c679-4842-85f7-4d513e6a36fc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a2755ffc-245a-44f5-88db-e8b857dcebae, UID in object meta: 
I0812 13:40:40.304228  112807 pv_controller_base.go:212] volume "pvc-000baa9f-c679-4842-85f7-4d513e6a36fc" deleted
I0812 13:40:40.304275  112807 pv_controller_base.go:396] deletion of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-controller-provisioned" was already processed
I0812 13:40:40.313775  112807 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (9.754331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.314112  112807 volume_binding_test.go:751] Running test wait provisioned
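The "wait provisioned" case uses a StorageClass with volumeBindingMode WaitForFirstConsumer: the claim created below stays Pending with a WaitForFirstConsumer event, and provisioning is only triggered by the scheduler's volume binder while placing pod-pvc-canprovision on node-1. The sketch below shows the two pieces such a scenario needs, a deferred-binding class and a pod that consumes the claim; every name, the image, and the provisioner string are illustrative rather than the test's generated objects (its class here is "wait-h94t").

// Sketch only: a WaitForFirstConsumer StorageClass plus a pod that consumes
// the claim, which is what finally triggers provisioning. Names are illustrative.
package sketch

import (
    corev1 "k8s.io/api/core/v1"
    storagev1 "k8s.io/api/storage/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func waitClass() *storagev1.StorageClass {
    mode := storagev1.VolumeBindingWaitForFirstConsumer
    return &storagev1.StorageClass{
        ObjectMeta:        metav1.ObjectMeta{Name: "wait-demo"},
        Provisioner:       "example.com/demo-provisioner", // hypothetical
        VolumeBindingMode: &mode,                          // defer binding until a pod uses the claim
    }
}

func consumerPod(claimName string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-demo", Namespace: "demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "app",
                Image:        "registry.k8s.io/pause:3.9",
                VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "data",
                VolumeSource: corev1.VolumeSource{
                    PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                        ClaimName: claimName,
                    },
                },
            }},
        },
    }
}

Deferring binding this way lets a topology-aware provisioner place the volume where the scheduler chose to run the pod, which is exactly the coordination the scheduler_binder lines below are doing.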
I0812 13:40:40.316212  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.712771ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.318352  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.615329ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.320431  112807 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.61832ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.322752  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (1.736536ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.323074  112807 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision", version 57077
I0812 13:40:40.323098  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:40.323123  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:40.323147  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Pending
I0812 13:40:40.323163  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: phase Pending already set
I0812 13:40:40.323216  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-canprovision", UID:"a3980d42-c794-48fd-833d-e77f96b8f28f", APIVersion:"v1", ResourceVersion:"57077", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0812 13:40:40.325094  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.623647ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.325230  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (2.024914ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.325555  112807 scheduling_queue.go:830] About to try and schedule pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision
I0812 13:40:40.325658  112807 scheduler.go:477] Attempting to schedule pod: volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision
I0812 13:40:40.326057  112807 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision", PVC "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" on node "node-1"
I0812 13:40:40.326207  112807 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision" that has no matching volumes on node "node-1" ...
I0812 13:40:40.326337  112807 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision", node "node-1"
I0812 13:40:40.326420  112807 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision", version 57077
I0812 13:40:40.326546  112807 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision", node "node-1"
I0812 13:40:40.328891  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.975211ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.329141  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 57080
I0812 13:40:40.329176  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:40.329202  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:40.329211  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:40.329231  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[a3980d42-c794-48fd-833d-e77f96b8f28f]]
I0812 13:40:40.329309  112807 pv_controller.go:1372] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] started, class: "wait-h94t"
I0812 13:40:40.331386  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.710326ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.331762  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 57081
I0812 13:40:40.331944  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 57081
I0812 13:40:40.331978  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:40.331999  112807 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: no volume found
I0812 13:40:40.332006  112807 pv_controller.go:1326] provisionClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: started
I0812 13:40:40.332022  112807 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[a3980d42-c794-48fd-833d-e77f96b8f28f]]
I0812 13:40:40.332033  112807 pv_controller.go:1642] operation "provision-volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision[a3980d42-c794-48fd-833d-e77f96b8f28f]" is already running, skipping
I0812 13:40:40.333976  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-a3980d42-c794-48fd-833d-e77f96b8f28f: (1.646388ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.334314  112807 pv_controller.go:1476] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" created
I0812 13:40:40.334345  112807 pv_controller.go:1493] provisionClaimOperation [volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: trying to save volume pvc-a3980d42-c794-48fd-833d-e77f96b8f28f
I0812 13:40:40.336479  112807 httplog.go:90] POST /api/v1/persistentvolumes: (1.879871ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.336804  112807 pv_controller.go:1501] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" saved
I0812 13:40:40.336842  112807 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f", version 57082
I0812 13:40:40.336863  112807 pv_controller.go:1554] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" provisioned for claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.337038  112807 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9", Name:"pvc-canprovision", UID:"a3980d42-c794-48fd-833d-e77f96b8f28f", APIVersion:"v1", ResourceVersion:"57081", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a3980d42-c794-48fd-833d-e77f96b8f28f using kubernetes.io/mock-provisioner
I0812 13:40:40.337315  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" with version 57082
I0812 13:40:40.337428  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: a3980d42-c794-48fd-833d-e77f96b8f28f)", boundByController: true
I0812 13:40:40.337520  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:40.337618  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:40.337720  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:40.337944  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 57081
I0812 13:40:40.338056  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:40.338188  112807 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" found: phase: Pending, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: a3980d42-c794-48fd-833d-e77f96b8f28f)", boundByController: true
I0812 13:40:40.338356  112807 pv_controller.go:931] binding volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.338540  112807 pv_controller.go:829] updating PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.338654  112807 pv_controller.go:841] updating PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.338782  112807 pv_controller.go:777] updating PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: set phase Bound
I0812 13:40:40.338667  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.562023ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:40.341582  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-a3980d42-c794-48fd-833d-e77f96b8f28f/status: (2.388938ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.341904  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" with version 57084
I0812 13:40:40.341939  112807 pv_controller.go:798] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" entered phase "Bound"
I0812 13:40:40.341954  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: binding to "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f"
I0812 13:40:40.341973  112807 pv_controller.go:901] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.342327  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" with version 57084
I0812 13:40:40.342371  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: a3980d42-c794-48fd-833d-e77f96b8f28f)", boundByController: true
I0812 13:40:40.342384  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:40.342401  112807 pv_controller.go:555] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0812 13:40:40.342419  112807 pv_controller.go:603] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: volume not bound yet, waiting for syncClaim to fix it
I0812 13:40:40.344193  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.957434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.344597  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 57085
I0812 13:40:40.344640  112807 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: bound to "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f"
I0812 13:40:40.344653  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Bound
I0812 13:40:40.347393  112807 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision/status: (1.755719ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.347770  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 57087
I0812 13:40:40.347803  112807 pv_controller.go:742] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" entered phase "Bound"
I0812 13:40:40.347823  112807 pv_controller.go:957] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.347848  112807 pv_controller.go:958] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: a3980d42-c794-48fd-833d-e77f96b8f28f)", boundByController: true
I0812 13:40:40.347867  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f", bindCompleted: true, boundByController: true
I0812 13:40:40.347918  112807 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" with version 57087
I0812 13:40:40.347943  112807 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: phase: Bound, bound to: "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f", bindCompleted: true, boundByController: true
I0812 13:40:40.347970  112807 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" found: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: a3980d42-c794-48fd-833d-e77f96b8f28f)", boundByController: true
I0812 13:40:40.347982  112807 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: claim is already correctly bound
I0812 13:40:40.347994  112807 pv_controller.go:931] binding volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.348007  112807 pv_controller.go:829] updating PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: binding to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.348035  112807 pv_controller.go:841] updating PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: already bound to "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.348046  112807 pv_controller.go:777] updating PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: set phase Bound
I0812 13:40:40.348056  112807 pv_controller.go:780] updating PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: phase Bound already set
I0812 13:40:40.348067  112807 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: binding to "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f"
I0812 13:40:40.348087  112807 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision]: already bound to "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f"
I0812 13:40:40.348098  112807 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: set phase Bound
I0812 13:40:40.348119  112807 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision] status: phase Bound already set
I0812 13:40:40.348132  112807 pv_controller.go:957] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" bound to claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision"
I0812 13:40:40.348153  112807 pv_controller.go:958] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" status after binding: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: a3980d42-c794-48fd-833d-e77f96b8f28f)", boundByController: true
I0812 13:40:40.348174  112807 pv_controller.go:959] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f", bindCompleted: true, boundByController: true
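The block above is the delayed-binding flow end to end: the claim stays Pending with a WaitForFirstConsumer event until the scheduler selects node-1, then provisionClaimOperation creates pvc-a3980d42-c794-48fd-833d-e77f96b8f28f and syncClaim binds it. As a rough, hedged illustration only (not the test's own fixtures), the Go sketch below builds the kind of StorageClass and PVC that drive this flow; the class name "wait-example" is assumed, while the provisioner name and claim name are taken from the log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Binding mode that makes the controller wait for a pod before
	// provisioning, matching the 'WaitForFirstConsumer' event above.
	mode := storagev1.VolumeBindingWaitForFirstConsumer

	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-example"}, // assumed name; the log uses "wait-h94t"
		Provisioner:       "kubernetes.io/mock-provisioner",        // provisioner reported in the log
		VolumeBindingMode: &mode,
	}

	className := sc.Name
	pvc := v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-canprovision"},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		},
	}

	fmt.Printf("class=%s mode=%s pvc=%s\n", sc.Name, *sc.VolumeBindingMode, pvc.Name)
}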
I0812 13:40:40.428630  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (2.529298ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.528472  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (2.18677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.628271  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (2.037017ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.728195  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (1.832566ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.829938  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (3.660855ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:40.929035  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (2.981173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.028516  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (2.352924ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.128207  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (2.114147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.196028  112807 cache.go:676] Couldn't expire cache for pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision. Binding is still in progress.
I0812 13:40:41.228642  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (2.465642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.327927  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (1.81673ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.329254  112807 scheduler_binder.go:545] All PVCs for pod "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision" are bound
I0812 13:40:41.329307  112807 factory.go:622] Attempting to bind pod-pvc-canprovision to node-1
I0812 13:40:41.331768  112807 httplog.go:90] POST /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision/binding: (2.137242ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.331967  112807 scheduler.go:614] pod volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pod-pvc-canprovision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0812 13:40:41.334006  112807 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/events: (1.739871ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
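Once the binder reports all PVCs bound, the scheduler completes scheduling by POSTing a Binding object to the pod's binding subresource, which is the request logged just above. A minimal sketch of that object, reusing the pod, namespace, and node names from the log (illustrative only, not the scheduler's actual code path):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Shape of the object sent to .../pods/<name>/binding after all
	// claims for the pod are bound.
	binding := v1.Binding{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "pod-pvc-canprovision",
			Namespace: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9",
		},
		Target: v1.ObjectReference{Kind: "Node", Name: "node-1"},
	}
	fmt.Printf("bind %s/%s -> %s\n", binding.Namespace, binding.Name, binding.Target.Name)
}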
I0812 13:40:41.428348  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods/pod-pvc-canprovision: (2.118413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.431127  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.908201ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.439197  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (7.368456ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.444631  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (4.987734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.445250  112807 pv_controller_base.go:258] claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" deleted
I0812 13:40:41.445319  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" with version 57084
I0812 13:40:41.445365  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: phase: Bound, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: a3980d42-c794-48fd-833d-e77f96b8f28f)", boundByController: true
I0812 13:40:41.445385  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:41.446999  112807 httplog.go:90] GET /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims/pvc-canprovision: (1.343419ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:41.447276  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision not found
I0812 13:40:41.447300  112807 pv_controller.go:575] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" is released and reclaim policy "Delete" will be executed
I0812 13:40:41.447312  112807 pv_controller.go:777] updating PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: set phase Released
I0812 13:40:41.449318  112807 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-a3980d42-c794-48fd-833d-e77f96b8f28f/status: (1.78641ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:41.449506  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" with version 57125
I0812 13:40:41.449535  112807 pv_controller.go:798] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" entered phase "Released"
I0812 13:40:41.449545  112807 pv_controller.go:1022] reclaimVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: policy is Delete
I0812 13:40:41.449568  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-a3980d42-c794-48fd-833d-e77f96b8f28f[778453ab-76dc-46b3-91df-446638e6d698]]
I0812 13:40:41.449592  112807 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" with version 57125
I0812 13:40:41.449611  112807 pv_controller.go:489] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: phase: Released, bound to: "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision (uid: a3980d42-c794-48fd-833d-e77f96b8f28f)", boundByController: true
I0812 13:40:41.449621  112807 pv_controller.go:514] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: volume is bound to claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision
I0812 13:40:41.449643  112807 pv_controller.go:547] synchronizing PersistentVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: claim volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision not found
I0812 13:40:41.449648  112807 pv_controller.go:1022] reclaimVolume[pvc-a3980d42-c794-48fd-833d-e77f96b8f28f]: policy is Delete
I0812 13:40:41.449655  112807 pv_controller.go:1631] scheduleOperation[delete-pvc-a3980d42-c794-48fd-833d-e77f96b8f28f[778453ab-76dc-46b3-91df-446638e6d698]]
I0812 13:40:41.449660  112807 pv_controller.go:1642] operation "delete-pvc-a3980d42-c794-48fd-833d-e77f96b8f28f[778453ab-76dc-46b3-91df-446638e6d698]" is already running, skipping
I0812 13:40:41.449741  112807 pv_controller.go:1146] deleteVolumeOperation [pvc-a3980d42-c794-48fd-833d-e77f96b8f28f] started
I0812 13:40:41.451141  112807 httplog.go:90] GET /api/v1/persistentvolumes/pvc-a3980d42-c794-48fd-833d-e77f96b8f28f: (1.160282ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43120]
I0812 13:40:41.451149  112807 pv_controller_base.go:212] volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" deleted
I0812 13:40:41.451332  112807 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.255928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.451421  112807 pv_controller_base.go:396] deletion of claim "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-canprovision" was already processed
I0812 13:40:41.451429  112807 pv_controller.go:1153] error reading persistent volume "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f": persistentvolumes "pvc-a3980d42-c794-48fd-833d-e77f96b8f28f" not found
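With the claim deleted, the controller applies the volume's reclaim policy: the PV enters Released and, because the policy is Delete, a delete operation is scheduled. A minimal sketch of that decision, with an illustrative helper name (not the controller's own function):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// shouldDelete mirrors the decision logged by reclaimVolume: a Released
// volume whose reclaim policy is Delete gets a delete operation scheduled.
func shouldDelete(pv *v1.PersistentVolume) bool {
	return pv.Status.Phase == v1.VolumeReleased &&
		pv.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimDelete
}

func main() {
	pv := &v1.PersistentVolume{
		Spec:   v1.PersistentVolumeSpec{PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimDelete},
		Status: v1.PersistentVolumeStatus{Phase: v1.VolumeReleased},
	}
	fmt.Println(shouldDelete(pv)) // true
}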
I0812 13:40:41.460372  112807 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.498939ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.460616  112807 volume_binding_test.go:932] test cluster "volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9" start to tear down
I0812 13:40:41.462132  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pods: (1.174926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.463768  112807 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/persistentvolumeclaims: (1.168868ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.465239  112807 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.03067ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.466802  112807 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (1.138453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.467108  112807 pv_controller_base.go:298] Shutting down persistent volume controller
I0812 13:40:41.467133  112807 pv_controller_base.go:409] claim worker queue shutting down
I0812 13:40:41.467275  112807 pv_controller_base.go:352] volume worker queue shutting down
E0812 13:40:41.467499  112807 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0812 13:40:41.467653  112807 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=56522&timeout=7m43s&timeoutSeconds=463&watch=true: (23.272991886s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42750]
I0812 13:40:41.467727  112807 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=56522&timeout=6m6s&timeoutSeconds=366&watch=true: (22.061269952s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42902]
I0812 13:40:41.467738  112807 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=56522&timeout=7m24s&timeoutSeconds=444&watch=true: (23.272350598s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42744]
I0812 13:40:41.467805  112807 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=56522&timeout=6m35s&timeoutSeconds=395&watch=true: (23.269716436s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0812 13:40:41.467731  112807 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=56522&timeout=7m1s&timeoutSeconds=421&watch=true: (23.271522582s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0812 13:40:41.467864  112807 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=56522&timeout=9m24s&timeoutSeconds=564&watch=true: (23.272512835s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42754]
I0812 13:40:41.467906  112807 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=56522&timeout=6m12s&timeoutSeconds=372&watch=true: (22.062164413s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42912]
I0812 13:40:41.467930  112807 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=56522&timeout=5m32s&timeoutSeconds=332&watch=true: (23.274528704s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0812 13:40:41.467942  112807 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=56522&timeout=8m28s&timeoutSeconds=508&watch=true: (22.062561183s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42740]
I0812 13:40:41.467971  112807 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=56522&timeout=5m53s&timeoutSeconds=353&watch=true: (23.269830716s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42764]
I0812 13:40:41.468018  112807 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=56522&timeout=5m23s&timeoutSeconds=323&watch=true: (22.062646779s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42910]
I0812 13:40:41.468035  112807 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=56522&timeout=7m24s&timeoutSeconds=444&watch=true: (23.273981851s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0812 13:40:41.468127  112807 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=56522&timeout=7m17s&timeoutSeconds=437&watch=true: (23.272280361s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42748]
I0812 13:40:41.468149  112807 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=56522&timeout=8m41s&timeoutSeconds=521&watch=true: (22.062134987s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42904]
I0812 13:40:41.468175  112807 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=56522&timeout=9m13s&timeoutSeconds=553&watch=true: (23.272899417s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42752]
I0812 13:40:41.469158  112807 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=56522&timeout=7m59s&timeoutSeconds=479&watch=true: (23.273176864s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42762]
I0812 13:40:41.472223  112807 httplog.go:90] DELETE /api/v1/nodes: (4.139441ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.472850  112807 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0812 13:40:41.474555  112807 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.278765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
I0812 13:40:41.476770  112807 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.594422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43114]
W0812 13:40:41.477485  112807 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0812 13:40:41.477511  112807 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
--- FAIL: TestVolumeProvision (26.74s)
    volume_binding_test.go:1149: Provisioning annotation on PVC volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind not behaving as expected: PVC volume-scheduling3bd5c2f7-c290-47d0-9fb7-33a37fa0b7e9/pvc-w-canbind not expected to be provisioned, but found selected-node annotation
    volume_binding_test.go:1191: PV pv-w-canbind phase not Bound, got Available

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190812-132956.xml
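The two failures reduce to: pvc-w-canbind should never have been handed to a provisioner, so it must not carry the selected-node annotation, and the pre-created PV pv-w-canbind should have ended up Bound rather than Available. A hedged sketch of those checks follows, assuming the standard volume.kubernetes.io/selected-node annotation key; the helper names are illustrative and not the test's own.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// Annotation written by the volume scheduler when it picks a node for
// delayed binding; finding it on pvc-w-canbind triggered the first failure.
const annSelectedNode = "volume.kubernetes.io/selected-node"

// hasSelectedNode reports whether a claim was handed off for dynamic
// provisioning, i.e. carries the selected-node annotation.
func hasSelectedNode(pvc *v1.PersistentVolumeClaim) bool {
	_, ok := pvc.Annotations[annSelectedNode]
	return ok
}

// isBound reports whether a pre-created PV such as pv-w-canbind has
// reached phase Bound, which the second failure message expected.
func isBound(pv *v1.PersistentVolume) bool {
	return pv.Status.Phase == v1.VolumeBound
}

func main() {
	pvc := &v1.PersistentVolumeClaim{}
	pv := &v1.PersistentVolume{Status: v1.PersistentVolumeStatus{Phase: v1.VolumeAvailable}}
	fmt.Println(hasSelectedNode(pvc), isBound(pv)) // false false
}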





Error lines from build-log.txt

... skipping 735 lines ...
W0812 13:24:31.126] I0812 13:24:31.124753   53084 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
W0812 13:24:31.126] W0812 13:24:31.124754   53084 controllermanager.go:527] Skipping "nodeipam"
W0812 13:24:31.127] I0812 13:24:31.125413   53084 controllermanager.go:535] Started "pvc-protection"
W0812 13:24:31.127] I0812 13:24:31.125641   53084 controllermanager.go:535] Started "csrcleaner"
W0812 13:24:31.127] I0812 13:24:31.125906   53084 pvc_protection_controller.go:100] Starting PVC protection controller
W0812 13:24:31.128] I0812 13:24:31.126112   53084 controller_utils.go:1029] Waiting for caches to sync for PVC protection controller
W0812 13:24:31.128] E0812 13:24:31.126122   53084 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0812 13:24:31.128] W0812 13:24:31.126550   53084 controllermanager.go:527] Skipping "service"
W0812 13:24:31.128] I0812 13:24:31.125987   53084 cleaner.go:81] Starting CSR cleaner controller
W0812 13:24:31.129] I0812 13:24:31.124095   53084 namespace_controller.go:186] Starting namespace controller
W0812 13:24:31.130] I0812 13:24:31.129480   53084 controller_utils.go:1029] Waiting for caches to sync for namespace controller
W0812 13:24:31.131] I0812 13:24:31.131187   53084 controllermanager.go:535] Started "persistentvolume-binder"
W0812 13:24:31.132] I0812 13:24:31.132038   53084 controllermanager.go:535] Started "podgc"
... skipping 14 lines ...
W0812 13:24:31.444] I0812 13:24:31.443925   53084 attach_detach_controller.go:335] Starting attach detach controller
W0812 13:24:31.444] I0812 13:24:31.443961   53084 controllermanager.go:535] Started "ttl"
W0812 13:24:31.445] I0812 13:24:31.443967   53084 controller_utils.go:1029] Waiting for caches to sync for attach detach controller
W0812 13:24:31.445] I0812 13:24:31.444098   53084 ttl_controller.go:116] Starting TTL controller
W0812 13:24:31.445] I0812 13:24:31.444127   53084 controller_utils.go:1029] Waiting for caches to sync for TTL controller
W0812 13:24:31.445] I0812 13:24:31.444372   53084 node_lifecycle_controller.go:77] Sending events to api server
W0812 13:24:31.445] E0812 13:24:31.444418   53084 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0812 13:24:31.445] W0812 13:24:31.444436   53084 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0812 13:24:31.446] I0812 13:24:31.445062   53084 controllermanager.go:535] Started "persistentvolume-expander"
W0812 13:24:31.446] I0812 13:24:31.445467   53084 controllermanager.go:535] Started "clusterrole-aggregation"
W0812 13:24:31.446] I0812 13:24:31.445618   53084 expand_controller.go:301] Starting expand controller
W0812 13:24:31.446] I0812 13:24:31.445819   53084 controller_utils.go:1029] Waiting for caches to sync for expand controller
W0812 13:24:31.446] I0812 13:24:31.445996   53084 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
... skipping 23 lines ...
W0812 13:24:31.452] I0812 13:24:31.451835   53084 controller_utils.go:1029] Waiting for caches to sync for deployment controller
W0812 13:24:31.452] I0812 13:24:31.452286   53084 controllermanager.go:535] Started "replicaset"
W0812 13:24:31.455] I0812 13:24:31.455044   53084 replica_set.go:182] Starting replicaset controller
W0812 13:24:31.467] I0812 13:24:31.466442   53084 controller_utils.go:1029] Waiting for caches to sync for ReplicaSet controller
W0812 13:24:31.546] I0812 13:24:31.546189   53084 controller_utils.go:1036] Caches are synced for expand controller
W0812 13:24:31.560] I0812 13:24:31.560261   53084 controller_utils.go:1036] Caches are synced for PV protection controller
W0812 13:24:31.658] W0812 13:24:31.657352   53084 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0812 13:24:31.731] I0812 13:24:31.730187   53084 controller_utils.go:1036] Caches are synced for namespace controller
W0812 13:24:31.745] I0812 13:24:31.744399   53084 controller_utils.go:1036] Caches are synced for TTL controller
W0812 13:24:31.758] I0812 13:24:31.757917   53084 controller_utils.go:1036] Caches are synced for service account controller
W0812 13:24:31.760] I0812 13:24:31.760196   49611 controller.go:606] quota admission added evaluator for: serviceaccounts
I0812 13:24:31.861] node/127.0.0.1 created
I0812 13:24:31.861] +++ [0812 13:24:31] Checking kubectl version
I0812 13:24:31.862] Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-alpha.3.155+0610bf0c7ed73a", GitCommit:"0610bf0c7ed73a8e8204cb870e20c724b24c0600", GitTreeState:"clean", BuildDate:"2019-08-12T13:22:23Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I0812 13:24:31.862] Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-alpha.3.155+0610bf0c7ed73a", GitCommit:"0610bf0c7ed73a8e8204cb870e20c724b24c0600", GitTreeState:"clean", BuildDate:"2019-08-12T13:22:47Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
W0812 13:24:31.963] I0812 13:24:31.959047   53084 controller_utils.go:1036] Caches are synced for certificate controller
W0812 13:24:32.029] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0812 13:24:32.047] I0812 13:24:32.046771   53084 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0812 13:24:32.060] E0812 13:24:32.059704   53084 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0812 13:24:32.061] E0812 13:24:32.059838   53084 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0812 13:24:32.075] E0812 13:24:32.075100   53084 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0812 13:24:32.116] I0812 13:24:32.115912   53084 controller_utils.go:1036] Caches are synced for resource quota controller
W0812 13:24:32.125] I0812 13:24:32.124989   53084 controller_utils.go:1036] Caches are synced for stateful set controller
W0812 13:24:32.127] I0812 13:24:32.126515   53084 controller_utils.go:1036] Caches are synced for PVC protection controller
W0812 13:24:32.134] I0812 13:24:32.133447   53084 controller_utils.go:1036] Caches are synced for persistent volume controller
W0812 13:24:32.134] I0812 13:24:32.133579   53084 controller_utils.go:1036] Caches are synced for GC controller
W0812 13:24:32.142] I0812 13:24:32.141458   53084 controller_utils.go:1036] Caches are synced for garbage collector controller
... skipping 98 lines ...
I0812 13:24:36.146] +++ command: run_RESTMapper_evaluation_tests
I0812 13:24:36.163] +++ [0812 13:24:36] Creating namespace namespace-1565616276-28265
I0812 13:24:36.248] namespace/namespace-1565616276-28265 created
I0812 13:24:36.331] Context "test" modified.
I0812 13:24:36.340] +++ [0812 13:24:36] Testing RESTMapper
W0812 13:24:36.441] /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 143: 53746 Terminated              kubectl proxy --port=0 --www=. --api-prefix="$1" > "${PROXY_PORT_FILE}" 2>&1
I0812 13:24:36.541] +++ [0812 13:24:36] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0812 13:24:36.542] +++ exit code: 0
I0812 13:24:36.608] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0812 13:24:36.609] bindings                                                                      true         Binding
I0812 13:24:36.610] componentstatuses                 cs                                          false        ComponentStatus
I0812 13:24:36.610] configmaps                        cm                                          true         ConfigMap
I0812 13:24:36.610] endpoints                         ep                                          true         Endpoints
... skipping 643 lines ...
I0812 13:24:56.896] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 13:24:57.086] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 13:24:57.187] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 13:24:57.382] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 13:24:57.490] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 13:24:57.580] (Bpod "valid-pod" force deleted
W0812 13:24:57.681] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0812 13:24:57.681] error: setting 'all' parameter but found a non empty selector. 
W0812 13:24:57.681] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0812 13:24:57.782] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:24:57.806] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0812 13:24:57.881] (Bnamespace/test-kubectl-describe-pod created
I0812 13:24:57.991] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0812 13:24:58.095] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
W0812 13:24:59.243] I0812 13:24:58.758433   49611 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
I0812 13:24:59.344] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0812 13:24:59.345] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0812 13:24:59.449] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0812 13:24:59.628] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:24:59.843] (Bpod/env-test-pod created
W0812 13:24:59.944] error: min-available and max-unavailable cannot be both specified
I0812 13:25:00.074] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0812 13:25:00.075] Name:         env-test-pod
I0812 13:25:00.075] Namespace:    test-kubectl-describe-pod
I0812 13:25:00.076] Priority:     0
I0812 13:25:00.076] Node:         <none>
I0812 13:25:00.076] Labels:       <none>
... skipping 173 lines ...
I0812 13:25:14.622] (Bpod/valid-pod patched
I0812 13:25:14.730] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0812 13:25:14.811] (Bpod/valid-pod patched
I0812 13:25:14.918] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0812 13:25:15.105] (Bpod/valid-pod patched
I0812 13:25:15.212] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0812 13:25:15.407] (B+++ [0812 13:25:15] "kubectl patch with resourceVersion 500" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0812 13:25:15.685] pod "valid-pod" deleted
I0812 13:25:15.699] pod/valid-pod replaced
I0812 13:25:15.811] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0812 13:25:15.999] (BSuccessful
I0812 13:25:16.000] message:error: --grace-period must have --force specified
I0812 13:25:16.000] has:\-\-grace-period must have \-\-force specified
I0812 13:25:16.183] Successful
I0812 13:25:16.184] message:error: --timeout must have --force specified
I0812 13:25:16.184] has:\-\-timeout must have \-\-force specified
I0812 13:25:16.365] node/node-v1-test created
W0812 13:25:16.466] W0812 13:25:16.364859   53084 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0812 13:25:16.567] node/node-v1-test replaced
I0812 13:25:16.645] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0812 13:25:16.729] (Bnode "node-v1-test" deleted
I0812 13:25:16.839] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0812 13:25:17.161] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0812 13:25:18.245] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 25 lines ...
I0812 13:25:18.830] (Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0812 13:25:18.918] (Bpod/valid-pod labeled
W0812 13:25:19.019] Edit cancelled, no changes made.
W0812 13:25:19.019] Edit cancelled, no changes made.
W0812 13:25:19.020] Edit cancelled, no changes made.
W0812 13:25:19.020] Edit cancelled, no changes made.
W0812 13:25:19.020] error: 'name' already has a value (valid-pod), and --overwrite is false
I0812 13:25:19.121] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0812 13:25:19.132] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 13:25:19.229] (Bpod "valid-pod" force deleted
W0812 13:25:19.331] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0812 13:25:19.432] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:25:19.433] (B+++ [0812 13:25:19] Creating namespace namespace-1565616319-5396
... skipping 82 lines ...
I0812 13:25:27.095] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0812 13:25:27.098] +++ working dir: /go/src/k8s.io/kubernetes
I0812 13:25:27.100] +++ command: run_kubectl_create_error_tests
I0812 13:25:27.114] +++ [0812 13:25:27] Creating namespace namespace-1565616327-21444
I0812 13:25:27.199] namespace/namespace-1565616327-21444 created
I0812 13:25:27.278] Context "test" modified.
I0812 13:25:27.287] +++ [0812 13:25:27] Testing kubectl create with error
W0812 13:25:27.388] Error: must specify one of -f and -k
W0812 13:25:27.389] 
W0812 13:25:27.390] Create a resource from a file or from stdin.
W0812 13:25:27.390] 
W0812 13:25:27.390]  JSON and YAML formats are accepted.
W0812 13:25:27.390] 
W0812 13:25:27.391] Examples:
... skipping 41 lines ...
W0812 13:25:27.404] 
W0812 13:25:27.404] Usage:
W0812 13:25:27.405]   kubectl create -f FILENAME [options]
W0812 13:25:27.405] 
W0812 13:25:27.405] Use "kubectl <command> --help" for more information about a given command.
W0812 13:25:27.405] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0812 13:25:27.552] +++ [0812 13:25:27] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0812 13:25:27.654] kubectl convert is DEPRECATED and will be removed in a future version.
W0812 13:25:27.655] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0812 13:25:27.755] +++ exit code: 0
I0812 13:25:27.788] Recording: run_kubectl_apply_tests
I0812 13:25:27.789] Running command: run_kubectl_apply_tests
I0812 13:25:27.812] 
... skipping 34 lines ...
I0812 13:25:30.511] +++ [0812 13:25:30] Creating namespace namespace-1565616330-32338
I0812 13:25:30.594] namespace/namespace-1565616330-32338 created
I0812 13:25:30.675] Context "test" modified.
I0812 13:25:30.685] +++ [0812 13:25:30] Testing kubectl run
I0812 13:25:30.789] run.sh:29: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:25:30.915] (Bjob.batch/pi created
W0812 13:25:31.015] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
W0812 13:25:31.016] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0812 13:25:31.017] I0812 13:25:30.901405   49611 controller.go:606] quota admission added evaluator for: jobs.batch
W0812 13:25:31.017] I0812 13:25:30.921944   53084 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565616330-32338", Name:"pi", UID:"75c62a9d-4bf4-4dae-a174-4d9d3276edf9", APIVersion:"batch/v1", ResourceVersion:"507", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-z2ct5
I0812 13:25:31.118] run.sh:33: Successful get jobs {{range.items}}{{.metadata.name}}:{{end}}: pi:
I0812 13:25:31.167] (BSuccessful describe pods:
I0812 13:25:31.168] Name:           pi-z2ct5
... skipping 83 lines ...
I0812 13:25:33.401] Context "test" modified.
I0812 13:25:33.409] +++ [0812 13:25:33] Testing kubectl create filter
I0812 13:25:33.510] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:25:33.735] (Bpod/selector-test-pod created
I0812 13:25:33.856] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0812 13:25:33.962] (BSuccessful
I0812 13:25:33.963] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0812 13:25:33.963] has:pods "selector-test-pod-dont-apply" not found
I0812 13:25:34.053] pod "selector-test-pod" deleted
I0812 13:25:34.076] +++ exit code: 0
I0812 13:25:34.116] Recording: run_kubectl_apply_deployments_tests
I0812 13:25:34.117] Running command: run_kubectl_apply_deployments_tests
I0812 13:25:34.141] 
... skipping 29 lines ...
W0812 13:25:36.686] I0812 13:25:36.591635   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565616334-17752", Name:"nginx", UID:"2678f589-0da3-45ef-a987-93d60009c926", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0812 13:25:36.687] I0812 13:25:36.603185   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616334-17752", Name:"nginx-7dbc4d9f", UID:"256e7e19-6723-4feb-8646-b7f84e6ff386", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-p9l6g
W0812 13:25:36.687] I0812 13:25:36.607476   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616334-17752", Name:"nginx-7dbc4d9f", UID:"256e7e19-6723-4feb-8646-b7f84e6ff386", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-l5zcq
W0812 13:25:36.688] I0812 13:25:36.609272   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616334-17752", Name:"nginx-7dbc4d9f", UID:"256e7e19-6723-4feb-8646-b7f84e6ff386", APIVersion:"apps/v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-2nn94
I0812 13:25:36.788] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0812 13:25:41.144] (BSuccessful
I0812 13:25:41.144] message:Error from server (Conflict): error when applying patch:
I0812 13:25:41.145] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565616334-17752\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0812 13:25:41.145] to:
I0812 13:25:41.145] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0812 13:25:41.145] Name: "nginx", Namespace: "namespace-1565616334-17752"
I0812 13:25:41.148] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565616334-17752\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-12T13:25:36Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-12T13:25:36Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-12T13:25:36Z"]] "name":"nginx" "namespace":"namespace-1565616334-17752" "resourceVersion":"594" "selfLink":"/apis/apps/v1/namespaces/namespace-1565616334-17752/deployments/nginx" "uid":"2678f589-0da3-45ef-a987-93d60009c926"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-12T13:25:36Z" "lastUpdateTime":"2019-08-12T13:25:36Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-12T13:25:36Z" "lastUpdateTime":"2019-08-12T13:25:36Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0812 13:25:41.149] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0812 13:25:41.149] has:Error from server (Conflict)
W0812 13:25:41.400] I0812 13:25:41.399602   53084 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565616324-17042
W0812 13:25:45.468] E0812 13:25:45.467839   53084 replica_set.go:450] Sync "namespace-1565616334-17752/nginx-7dbc4d9f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-7dbc4d9f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1565616334-17752/nginx-7dbc4d9f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 256e7e19-6723-4feb-8646-b7f84e6ff386, UID in object meta: 
I0812 13:25:46.442] deployment.apps/nginx configured
W0812 13:25:46.543] I0812 13:25:46.447006   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565616334-17752", Name:"nginx", UID:"b4e970b5-09f3-4d51-8a35-868f71ad0aa7", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0812 13:25:46.545] I0812 13:25:46.450581   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616334-17752", Name:"nginx-594f77b9f6", UID:"269da6f7-89c5-480b-ac6e-e9228cc968c6", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-tj5bf
W0812 13:25:46.545] I0812 13:25:46.454874   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616334-17752", Name:"nginx-594f77b9f6", UID:"269da6f7-89c5-480b-ac6e-e9228cc968c6", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-k9c8d
W0812 13:25:46.546] I0812 13:25:46.455749   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616334-17752", Name:"nginx-594f77b9f6", UID:"269da6f7-89c5-480b-ac6e-e9228cc968c6", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-jrbqv
I0812 13:25:46.646] Successful
... skipping 192 lines ...
I0812 13:25:54.033] +++ [0812 13:25:54] Creating namespace namespace-1565616354-5913
I0812 13:25:54.119] namespace/namespace-1565616354-5913 created
I0812 13:25:54.212] Context "test" modified.
I0812 13:25:54.219] +++ [0812 13:25:54] Testing kubectl get
I0812 13:25:54.323] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:25:54.419] (BSuccessful
I0812 13:25:54.419] message:Error from server (NotFound): pods "abc" not found
I0812 13:25:54.419] has:pods "abc" not found
I0812 13:25:54.518] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:25:54.615] (BSuccessful
I0812 13:25:54.616] message:Error from server (NotFound): pods "abc" not found
I0812 13:25:54.616] has:pods "abc" not found
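
Note: both checks above assert the standard NotFound behaviour of kubectl get for a pod that does not exist; roughly:

  $ kubectl get pods abc
  Error from server (NotFound): pods "abc" not found    # kubectl also exits non-zero here
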
I0812 13:25:54.711] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:25:54.801] (BSuccessful
I0812 13:25:54.801] message:{
I0812 13:25:54.801]     "apiVersion": "v1",
I0812 13:25:54.801]     "items": [],
... skipping 23 lines ...
I0812 13:25:55.180] has not:No resources found
I0812 13:25:55.276] Successful
I0812 13:25:55.277] message:NAME
I0812 13:25:55.277] has not:No resources found
I0812 13:25:55.384] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:25:55.497] (BSuccessful
I0812 13:25:55.498] message:error: the server doesn't have a resource type "foobar"
I0812 13:25:55.498] has not:No resources found
I0812 13:25:55.594] Successful
I0812 13:25:55.596] message:No resources found in namespace-1565616354-5913 namespace.
I0812 13:25:55.597] has:No resources found
I0812 13:25:55.699] Successful
I0812 13:25:55.700] message:
I0812 13:25:55.700] has not:No resources found
I0812 13:25:55.797] Successful
I0812 13:25:55.798] message:No resources found in namespace-1565616354-5913 namespace.
I0812 13:25:55.798] has:No resources found
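
Note: these checks contrast the human-readable "No resources found" notice for an empty list with structured output modes, where it is suppressed. A sketch, assuming plain and -o name invocations (the exact flags used by get.sh are not shown in this excerpt):

  $ kubectl get pods            # prints: No resources found in <namespace> namespace.
  $ kubectl get pods -o name    # prints nothing for an empty list
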
I0812 13:25:55.899] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:25:55.992] (BSuccessful
I0812 13:25:55.992] message:Error from server (NotFound): pods "abc" not found
I0812 13:25:55.993] has:pods "abc" not found
I0812 13:25:55.993] FAIL!
I0812 13:25:55.993] message:Error from server (NotFound): pods "abc" not found
I0812 13:25:55.993] has not:List
I0812 13:25:55.993] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0812 13:25:56.118] Successful
I0812 13:25:56.118] message:I0812 13:25:56.067258   63647 loader.go:375] Config loaded from file:  /tmp/tmp.hRF8gmr9Cm/.kube/config
I0812 13:25:56.119] I0812 13:25:56.069268   63647 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0812 13:25:56.119] I0812 13:25:56.090844   63647 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
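
Note: the loader.go and round_trippers lines above are kubectl's own client-side logging; any command reproduces them at verbosity 6 or higher, e.g.:

  $ kubectl get pods -v=6    # logs kubeconfig loading plus each HTTP round trip (method, URL, status, latency)
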
I0812 13:26:01.838] Successful
I0812 13:26:01.838] message:NAME    DATA   AGE
I0812 13:26:01.838] one     0      0s
I0812 13:26:01.839] three   0      0s
I0812 13:26:01.839] two     0      0s
I0812 13:26:01.839] STATUS    REASON          MESSAGE
I0812 13:26:01.839] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 13:26:01.840] has not:watch is only supported on individual resources
I0812 13:26:02.943] Successful
I0812 13:26:02.944] message:STATUS    REASON          MESSAGE
I0812 13:26:02.944] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 13:26:02.944] has not:watch is only supported on individual resources
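
Note: the Failure/InternalError rows above are the expected way a table watch ends when the client-side request timeout expires mid-stream. A rough reproduction, assuming a watch with a short request timeout (the exact timeout flag used by the test is an assumption, not shown here):

  $ kubectl get configmaps --watch --request-timeout=1s
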
I0812 13:26:02.950] +++ [0812 13:26:02] Creating namespace namespace-1565616362-483
I0812 13:26:03.042] namespace/namespace-1565616362-483 created
I0812 13:26:03.122] Context "test" modified.
I0812 13:26:03.235] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:03.426] (Bpod/valid-pod created
... skipping 104 lines ...
I0812 13:26:03.538] }
I0812 13:26:03.641] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0812 13:26:03.925] (B<no value>Successful
I0812 13:26:03.925] message:valid-pod:
I0812 13:26:03.926] has:valid-pod:
I0812 13:26:04.020] Successful
I0812 13:26:04.021] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0812 13:26:04.021] 	template was:
I0812 13:26:04.021] 		{.missing}
I0812 13:26:04.022] 	object given to jsonpath engine was:
I0812 13:26:04.024] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-12T13:26:03Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-12T13:26:03Z"}}, "name":"valid-pod", "namespace":"namespace-1565616362-483", "resourceVersion":"694", "selfLink":"/api/v1/namespaces/namespace-1565616362-483/pods/valid-pod", "uid":"ea1a6f48-7221-4d15-952d-483950a79fa8"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0812 13:26:04.024] has:missing is not found
I0812 13:26:04.118] Successful
I0812 13:26:04.119] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0812 13:26:04.119] 	template was:
I0812 13:26:04.119] 		{{.missing}}
I0812 13:26:04.120] 	raw data was:
I0812 13:26:04.121] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-12T13:26:03Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-12T13:26:03Z"}],"name":"valid-pod","namespace":"namespace-1565616362-483","resourceVersion":"694","selfLink":"/api/v1/namespaces/namespace-1565616362-483/pods/valid-pod","uid":"ea1a6f48-7221-4d15-952d-483950a79fa8"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0812 13:26:04.121] 	object given to template engine was:
I0812 13:26:04.122] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-12T13:26:03Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-12T13:26:03Z]] name:valid-pod namespace:namespace-1565616362-483 resourceVersion:694 selfLink:/api/v1/namespaces/namespace-1565616362-483/pods/valid-pod uid:ea1a6f48-7221-4d15-952d-483950a79fa8] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0812 13:26:04.123] has:map has no entry for key "missing"
W0812 13:26:04.223] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
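
Note: the two template failures above come from the two client-side output formats; both flags are standard kubectl:

  $ kubectl get pod valid-pod -o jsonpath='{.missing}'        # jsonpath: "missing is not found"
  $ kubectl get pod valid-pod -o go-template='{{.missing}}'   # go-template: map has no entry for key "missing"
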
I0812 13:26:05.228] Successful
I0812 13:26:05.228] message:NAME        READY   STATUS    RESTARTS   AGE
I0812 13:26:05.229] valid-pod   0/1     Pending   0          1s
I0812 13:26:05.229] STATUS      REASON          MESSAGE
I0812 13:26:05.229] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 13:26:05.229] has:STATUS
I0812 13:26:05.230] Successful
I0812 13:26:05.230] message:NAME        READY   STATUS    RESTARTS   AGE
I0812 13:26:05.230] valid-pod   0/1     Pending   0          1s
I0812 13:26:05.231] STATUS      REASON          MESSAGE
I0812 13:26:05.231] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 13:26:05.232] has:valid-pod
I0812 13:26:06.328] Successful
I0812 13:26:06.329] message:pod/valid-pod
I0812 13:26:06.329] has not:STATUS
I0812 13:26:06.332] Successful
I0812 13:26:06.332] message:pod/valid-pod
... skipping 144 lines ...
I0812 13:26:07.459] status:
I0812 13:26:07.459]   phase: Pending
I0812 13:26:07.459]   qosClass: Guaranteed
I0812 13:26:07.459] ---
I0812 13:26:07.460] has:name: valid-pod
I0812 13:26:07.537] Successful
I0812 13:26:07.537] message:Error from server (NotFound): pods "invalid-pod" not found
I0812 13:26:07.538] has:"invalid-pod" not found
I0812 13:26:07.631] pod "valid-pod" deleted
I0812 13:26:07.743] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:07.917] (Bpod/redis-master created
I0812 13:26:07.923] pod/valid-pod created
I0812 13:26:08.034] Successful
... skipping 35 lines ...
I0812 13:26:09.325] +++ command: run_kubectl_exec_pod_tests
I0812 13:26:09.338] +++ [0812 13:26:09] Creating namespace namespace-1565616369-29186
I0812 13:26:09.423] namespace/namespace-1565616369-29186 created
I0812 13:26:09.507] Context "test" modified.
I0812 13:26:09.515] +++ [0812 13:26:09] Testing kubectl exec POD COMMAND
I0812 13:26:09.610] Successful
I0812 13:26:09.611] message:Error from server (NotFound): pods "abc" not found
I0812 13:26:09.611] has:pods "abc" not found
I0812 13:26:09.786] pod/test-pod created
I0812 13:26:09.902] Successful
I0812 13:26:09.903] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0812 13:26:09.904] has not:pods "test-pod" not found
I0812 13:26:09.905] Successful
I0812 13:26:09.905] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0812 13:26:09.905] has not:pod or type/name must be specified
I0812 13:26:10.001] pod "test-pod" deleted
I0812 13:26:10.026] +++ exit code: 0
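
Note: the exec checks above only need an unscheduled pod, since this test environment runs an API server without kubelets. A sketch (the date command is just a placeholder):

  $ kubectl exec abc -- date        # NotFound: pod "abc" does not exist
  $ kubectl exec test-pod -- date   # BadRequest: the pod has no node assigned
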
I0812 13:26:10.071] Recording: run_kubectl_exec_resource_name_tests
I0812 13:26:10.072] Running command: run_kubectl_exec_resource_name_tests
I0812 13:26:10.097] 
... skipping 2 lines ...
I0812 13:26:10.106] +++ command: run_kubectl_exec_resource_name_tests
I0812 13:26:10.126] +++ [0812 13:26:10] Creating namespace namespace-1565616370-19250
I0812 13:26:10.219] namespace/namespace-1565616370-19250 created
I0812 13:26:10.299] Context "test" modified.
I0812 13:26:10.307] +++ [0812 13:26:10] Testing kubectl exec TYPE/NAME COMMAND
I0812 13:26:10.416] Successful
I0812 13:26:10.417] message:error: the server doesn't have a resource type "foo"
I0812 13:26:10.418] has:error:
I0812 13:26:10.508] Successful
I0812 13:26:10.509] message:Error from server (NotFound): deployments.apps "bar" not found
I0812 13:26:10.510] has:"bar" not found
I0812 13:26:10.678] pod/test-pod created
I0812 13:26:10.857] replicaset.apps/frontend created
W0812 13:26:10.958] I0812 13:26:10.864821   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616370-19250", Name:"frontend", UID:"c0688722-96d9-41f7-bd90-077e020d7cf5", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xjxc6
W0812 13:26:10.959] I0812 13:26:10.869206   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616370-19250", Name:"frontend", UID:"c0688722-96d9-41f7-bd90-077e020d7cf5", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jdpwt
W0812 13:26:10.959] I0812 13:26:10.869988   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616370-19250", Name:"frontend", UID:"c0688722-96d9-41f7-bd90-077e020d7cf5", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-btdhs
I0812 13:26:11.060] configmap/test-set-env-config created
I0812 13:26:11.150] Successful
I0812 13:26:11.151] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0812 13:26:11.151] has:not implemented
I0812 13:26:11.249] Successful
I0812 13:26:11.249] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0812 13:26:11.249] has not:not found
I0812 13:26:11.252] Successful
I0812 13:26:11.252] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0812 13:26:11.252] has not:pod or type/name must be specified
I0812 13:26:11.366] Successful
I0812 13:26:11.367] message:Error from server (BadRequest): pod frontend-btdhs does not have a host assigned
I0812 13:26:11.367] has not:not found
I0812 13:26:11.369] Successful
I0812 13:26:11.369] message:Error from server (BadRequest): pod frontend-btdhs does not have a host assigned
I0812 13:26:11.370] has not:pod or type/name must be specified
I0812 13:26:11.461] pod "test-pod" deleted
I0812 13:26:11.554] replicaset.apps "frontend" deleted
I0812 13:26:11.649] configmap "test-set-env-config" deleted
I0812 13:26:11.674] +++ exit code: 0
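
Note: same idea for TYPE/NAME targets — kubectl exec first resolves the resource to one of its pods, so a missing deployment fails with NotFound while a ReplicaSet pod fails with the same BadRequest as above. Sketch, with an illustrative command:

  $ kubectl exec deployment/bar -- date        # NotFound: deployments.apps "bar" not found
  $ kubectl exec replicaset/frontend -- date   # BadRequest: the selected pod has no host assigned
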
I0812 13:26:11.718] Recording: run_create_secret_tests
I0812 13:26:11.718] Running command: run_create_secret_tests
I0812 13:26:11.744] 
I0812 13:26:11.746] +++ Running case: test-cmd.run_create_secret_tests 
I0812 13:26:11.750] +++ working dir: /go/src/k8s.io/kubernetes
I0812 13:26:11.753] +++ command: run_create_secret_tests
I0812 13:26:11.857] Successful
I0812 13:26:11.858] message:Error from server (NotFound): secrets "mysecret" not found
I0812 13:26:11.858] has:secrets "mysecret" not found
I0812 13:26:12.039] Successful
I0812 13:26:12.039] message:Error from server (NotFound): secrets "mysecret" not found
I0812 13:26:12.039] has:secrets "mysecret" not found
I0812 13:26:12.041] Successful
I0812 13:26:12.042] message:user-specified
I0812 13:26:12.042] has:user-specified
I0812 13:26:12.125] Successful
I0812 13:26:12.212] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"7d4ba9f5-bbcb-4f78-82ac-2eb52e402ed3","resourceVersion":"767","creationTimestamp":"2019-08-12T13:26:12Z"}}
... skipping 2 lines ...
I0812 13:26:12.421] has:uid
I0812 13:26:12.509] Successful
I0812 13:26:12.510] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"7d4ba9f5-bbcb-4f78-82ac-2eb52e402ed3","resourceVersion":"768","creationTimestamp":"2019-08-12T13:26:12Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-12T13:26:12Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0812 13:26:12.510] has:config1
I0812 13:26:12.590] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"7d4ba9f5-bbcb-4f78-82ac-2eb52e402ed3"}}
I0812 13:26:12.703] Successful
I0812 13:26:12.703] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0812 13:26:12.704] has:configmaps "tester-update-cm" not found
I0812 13:26:12.715] +++ exit code: 0
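
Note: the secret checks above follow the usual create-then-read-back pattern; a sketch (the key name below is illustrative, not taken from the test):

  $ kubectl create secret generic mysecret --from-literal=username=user-specified
  $ kubectl get secret mysecret -o jsonpath='{.data.username}' | base64 -d    # user-specified
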
I0812 13:26:12.751] Recording: run_kubectl_create_kustomization_directory_tests
I0812 13:26:12.752] Running command: run_kubectl_create_kustomization_directory_tests
I0812 13:26:12.774] 
I0812 13:26:12.777] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0812 13:26:15.729] valid-pod   0/1     Pending   0          0s
I0812 13:26:15.729] has:valid-pod
I0812 13:26:16.823] Successful
I0812 13:26:16.823] message:NAME        READY   STATUS    RESTARTS   AGE
I0812 13:26:16.823] valid-pod   0/1     Pending   0          0s
I0812 13:26:16.823] STATUS      REASON          MESSAGE
I0812 13:26:16.824] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0812 13:26:16.824] has:Timeout exceeded while reading body
I0812 13:26:16.921] Successful
I0812 13:26:16.921] message:NAME        READY   STATUS    RESTARTS   AGE
I0812 13:26:16.921] valid-pod   0/1     Pending   0          1s
I0812 13:26:16.921] has:valid-pod
I0812 13:26:17.010] Successful
I0812 13:26:17.010] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0812 13:26:17.010] has:Invalid timeout value
I0812 13:26:17.094] pod "valid-pod" deleted
I0812 13:26:17.117] +++ exit code: 0
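
Note: the timeout check above exercises kubectl's client-side validation of the timeout string; presumably something like the following (the flag name is an assumption, not shown in this excerpt):

  $ kubectl get pods --request-timeout=1s    # accepted
  $ kubectl get pods --request-timeout=foo   # error: Invalid timeout value
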
I0812 13:26:17.160] Recording: run_crd_tests
I0812 13:26:17.161] Running command: run_crd_tests
I0812 13:26:17.187] 
... skipping 245 lines ...
I0812 13:26:22.313] foo.company.com/test patched
I0812 13:26:22.420] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0812 13:26:22.516] (Bfoo.company.com/test patched
I0812 13:26:22.618] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0812 13:26:22.717] (Bfoo.company.com/test patched
I0812 13:26:22.825] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0812 13:26:22.999] (B+++ [0812 13:26:22] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0812 13:26:23.071] {
I0812 13:26:23.072]     "apiVersion": "company.com/v1",
I0812 13:26:23.072]     "kind": "Foo",
I0812 13:26:23.072]     "metadata": {
I0812 13:26:23.072]         "annotations": {
I0812 13:26:23.072]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 352 lines ...
I0812 13:26:40.993] (Bnamespace/non-native-resources created
I0812 13:26:41.196] bar.company.com/test created
I0812 13:26:41.306] crd.sh:455: Successful get bars {{len .items}}: 1
I0812 13:26:41.395] (Bnamespace "non-native-resources" deleted
I0812 13:26:46.668] crd.sh:458: Successful get bars {{len .items}}: 0
I0812 13:26:46.861] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0812 13:26:46.962] Error from server (NotFound): namespaces "non-native-resources" not found
I0812 13:26:47.063] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0812 13:26:47.115] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0812 13:26:47.238] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0812 13:26:47.271] +++ exit code: 0
I0812 13:26:47.312] Recording: run_cmd_with_img_tests
I0812 13:26:47.313] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0812 13:26:47.668] I0812 13:26:47.667232   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616407-13692", Name:"test1-9797f89d8", UID:"1198e05b-255f-4b70-adf5-8ac76f186ca6", APIVersion:"apps/v1", ResourceVersion:"921", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-jb6sw
I0812 13:26:47.769] Successful
I0812 13:26:47.770] message:deployment.apps/test1 created
I0812 13:26:47.770] has:deployment.apps/test1 created
I0812 13:26:47.771] deployment.apps "test1" deleted
I0812 13:26:47.847] Successful
I0812 13:26:47.848] message:error: Invalid image name "InvalidImageName": invalid reference format
I0812 13:26:47.848] has:error: Invalid image name "InvalidImageName": invalid reference format
I0812 13:26:47.863] +++ exit code: 0
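
Note: the InvalidImageName failure above is kubectl run's client-side image-reference validation (note the "error:" prefix rather than "Error from server"). Sketch, with illustrative image names:

  $ kubectl run test1 --image=k8s.gcr.io/pause      # well-formed reference, accepted
  $ kubectl run test1 --image=InvalidImageName      # rejected before anything is sent: invalid reference format
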
I0812 13:26:47.908] +++ [0812 13:26:47] Testing recursive resources
I0812 13:26:47.916] +++ [0812 13:26:47] Creating namespace namespace-1565616407-326
I0812 13:26:48.002] namespace/namespace-1565616407-326 created
I0812 13:26:48.089] Context "test" modified.
W0812 13:26:48.190] W0812 13:26:47.879822   49611 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0812 13:26:48.191] E0812 13:26:47.881559   53084 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:48.191] W0812 13:26:47.992963   49611 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0812 13:26:48.191] E0812 13:26:47.995111   53084 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:48.192] W0812 13:26:48.131670   49611 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0812 13:26:48.192] E0812 13:26:48.133844   53084 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:48.251] W0812 13:26:48.250918   49611 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0812 13:26:48.253] E0812 13:26:48.252653   53084 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:48.354] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:48.545] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:48.547] (BSuccessful
I0812 13:26:48.548] message:pod/busybox0 created
I0812 13:26:48.548] pod/busybox1 created
I0812 13:26:48.549] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0812 13:26:48.549] has:error validating data: kind not set
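
Note: everything in this recursive block points kubectl at a directory rather than a single file; the busybox0/busybox1 successes plus exactly one decode/validation error per command are the expected result of walking hack/testdata/recursive/..., where one manifest intentionally misspells kind as "ind". The create above is roughly:

  $ kubectl create -f hack/testdata/recursive/pod --recursive    # two pods created, broken manifest reported
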
I0812 13:26:48.651] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:48.854] (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0812 13:26:48.857] (BSuccessful
I0812 13:26:48.858] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:48.858] has:Object 'Kind' is missing
W0812 13:26:48.959] E0812 13:26:48.883248   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:48.997] E0812 13:26:48.997037   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:49.098] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:49.289] (Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0812 13:26:49.293] (BSuccessful
I0812 13:26:49.294] message:pod/busybox0 replaced
I0812 13:26:49.295] pod/busybox1 replaced
I0812 13:26:49.295] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0812 13:26:49.296] has:error validating data: kind not set
W0812 13:26:49.397] E0812 13:26:49.135883   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:49.398] E0812 13:26:49.254277   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:49.498] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:49.524] (BSuccessful
I0812 13:26:49.525] message:Name:         busybox0
I0812 13:26:49.525] Namespace:    namespace-1565616407-326
I0812 13:26:49.525] Priority:     0
I0812 13:26:49.525] Node:         <none>
... skipping 159 lines ...
I0812 13:26:49.553] has:Object 'Kind' is missing
I0812 13:26:49.639] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:49.852] (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0812 13:26:49.855] (BSuccessful
I0812 13:26:49.855] message:pod/busybox0 annotated
I0812 13:26:49.856] pod/busybox1 annotated
I0812 13:26:49.856] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:49.856] has:Object 'Kind' is missing
I0812 13:26:49.959] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:50.334] (Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0812 13:26:50.339] (BSuccessful
I0812 13:26:50.340] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0812 13:26:50.340] pod/busybox0 configured
I0812 13:26:50.340] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0812 13:26:50.341] pod/busybox1 configured
I0812 13:26:50.341] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0812 13:26:50.341] has:error validating data: kind not set
I0812 13:26:50.441] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:50.616] (Bdeployment.apps/nginx created
W0812 13:26:50.716] E0812 13:26:49.884976   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:50.717] E0812 13:26:49.999070   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:50.717] E0812 13:26:50.138290   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:50.718] E0812 13:26:50.256128   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:50.718] I0812 13:26:50.621727   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565616407-326", Name:"nginx", UID:"31dbd81e-1dd6-4a35-81c9-dfdef0041bc1", APIVersion:"apps/v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0812 13:26:50.718] I0812 13:26:50.626458   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616407-326", Name:"nginx-bbbbb95b5", UID:"0713cf77-8b32-4341-9645-f943460a8a20", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-gz9bl
W0812 13:26:50.718] I0812 13:26:50.630631   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616407-326", Name:"nginx-bbbbb95b5", UID:"0713cf77-8b32-4341-9645-f943460a8a20", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-wkvhq
W0812 13:26:50.719] I0812 13:26:50.631557   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616407-326", Name:"nginx-bbbbb95b5", UID:"0713cf77-8b32-4341-9645-f943460a8a20", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-2rlqv
I0812 13:26:50.819] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0812 13:26:50.832] (Bgeneric-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 39 lines ...
I0812 13:26:51.035]       schedulerName: default-scheduler
I0812 13:26:51.035]       securityContext: {}
I0812 13:26:51.035]       terminationGracePeriodSeconds: 30
I0812 13:26:51.035] status: {}
I0812 13:26:51.035] has:extensions/v1beta1
I0812 13:26:51.121] deployment.apps "nginx" deleted
W0812 13:26:51.222] E0812 13:26:50.886526   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:51.222] kubectl convert is DEPRECATED and will be removed in a future version.
W0812 13:26:51.223] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0812 13:26:51.223] E0812 13:26:51.001364   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:51.223] E0812 13:26:51.139414   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:51.259] E0812 13:26:51.258143   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:51.359] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:51.420] (Bgeneric-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:51.423] (BSuccessful
I0812 13:26:51.423] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0812 13:26:51.423] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0812 13:26:51.423] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:51.424] has:Object 'Kind' is missing
I0812 13:26:51.524] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:51.621] (BSuccessful
I0812 13:26:51.622] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:51.622] has:busybox0:busybox1:
I0812 13:26:51.623] Successful
I0812 13:26:51.623] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:51.624] has:Object 'Kind' is missing
I0812 13:26:51.726] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:51.833] (Bpod/busybox0 labeled
I0812 13:26:51.833] pod/busybox1 labeled
I0812 13:26:51.834] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:51.934] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0812 13:26:51.936] (BSuccessful
I0812 13:26:51.936] message:pod/busybox0 labeled
I0812 13:26:51.937] pod/busybox1 labeled
I0812 13:26:51.937] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:51.937] has:Object 'Kind' is missing
I0812 13:26:52.042] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:52.151] (Bpod/busybox0 patched
I0812 13:26:52.152] pod/busybox1 patched
I0812 13:26:52.153] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:52.252] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0812 13:26:52.254] (BSuccessful
I0812 13:26:52.255] message:pod/busybox0 patched
I0812 13:26:52.255] pod/busybox1 patched
I0812 13:26:52.255] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:52.255] has:Object 'Kind' is missing
I0812 13:26:52.356] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:52.560] (Bgeneric-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:52.562] (BSuccessful
I0812 13:26:52.562] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0812 13:26:52.563] pod "busybox0" force deleted
I0812 13:26:52.563] pod "busybox1" force deleted
I0812 13:26:52.563] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0812 13:26:52.563] has:Object 'Kind' is missing
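
Note: the "Immediate deletion does not wait..." warning and forced deletions above correspond to the immediate-delete flags; roughly:

  $ kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0
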
I0812 13:26:52.665] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:52.846] (Breplicationcontroller/busybox0 created
I0812 13:26:52.853] replicationcontroller/busybox1 created
W0812 13:26:52.953] E0812 13:26:51.888605   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:52.954] E0812 13:26:52.003109   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:52.954] E0812 13:26:52.142091   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:52.954] E0812 13:26:52.259985   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:52.954] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0812 13:26:52.955] I0812 13:26:52.852926   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565616407-326", Name:"busybox0", UID:"860dbeaa-f8c1-4835-a815-58ea7c3a5541", APIVersion:"v1", ResourceVersion:"977", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cf9bb
W0812 13:26:52.955] I0812 13:26:52.860147   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565616407-326", Name:"busybox1", UID:"9c1a0eb3-2b7a-4ac6-94af-e7f6cb1920e0", APIVersion:"v1", ResourceVersion:"979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-t8vhm
W0812 13:26:52.955] E0812 13:26:52.890281   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:53.005] E0812 13:26:53.004658   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:53.106] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:53.107] (Bgeneric-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:53.174] (Bgeneric-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0812 13:26:53.273] (Bgeneric-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0812 13:26:53.476] (Bgeneric-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0812 13:26:53.579] (Bgeneric-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0812 13:26:53.582] (BSuccessful
I0812 13:26:53.583] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0812 13:26:53.583] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0812 13:26:53.584] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 13:26:53.584] has:Object 'Kind' is missing
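
Note: the "1 2 80" values checked above map directly onto the autoscale flags; a sketch against the same recursive fixture:

  $ kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80
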
I0812 13:26:53.673] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0812 13:26:53.769] horizontalpodautoscaler.autoscaling "busybox1" deleted
W0812 13:26:53.870] E0812 13:26:53.143757   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:53.871] E0812 13:26:53.261465   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:53.893] E0812 13:26:53.892168   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:53.994] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:53.995] (Bgeneric-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0812 13:26:54.081] (Bgeneric-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0812 13:26:54.312] (Bgeneric-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0812 13:26:54.415] (Bgeneric-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0812 13:26:54.417] (BSuccessful
I0812 13:26:54.417] message:service/busybox0 exposed
I0812 13:26:54.417] service/busybox1 exposed
I0812 13:26:54.418] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 13:26:54.418] has:Object 'Kind' is missing
W0812 13:26:54.519] E0812 13:26:54.006989   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:54.519] E0812 13:26:54.145819   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:54.519] E0812 13:26:54.263194   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:54.620] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:54.627] (Bgeneric-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0812 13:26:54.734] (Bgeneric-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0812 13:26:54.965] (Bgeneric-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0812 13:26:55.075] (Bgeneric-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0812 13:26:55.077] (BSuccessful
I0812 13:26:55.078] message:replicationcontroller/busybox0 scaled
I0812 13:26:55.078] replicationcontroller/busybox1 scaled
I0812 13:26:55.079] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 13:26:55.079] has:Object 'Kind' is missing
W0812 13:26:55.180] I0812 13:26:54.841869   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565616407-326", Name:"busybox0", UID:"860dbeaa-f8c1-4835-a815-58ea7c3a5541", APIVersion:"v1", ResourceVersion:"998", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-5zdpv
W0812 13:26:55.181] I0812 13:26:54.859051   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565616407-326", Name:"busybox1", UID:"9c1a0eb3-2b7a-4ac6-94af-e7f6cb1920e0", APIVersion:"v1", ResourceVersion:"1002", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-8jsfn
W0812 13:26:55.181] E0812 13:26:54.893861   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:55.182] E0812 13:26:55.008311   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:55.182] E0812 13:26:55.147651   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:55.265] E0812 13:26:55.264946   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:55.366] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:55.404] (Bgeneric-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:55.407] (BSuccessful
I0812 13:26:55.408] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0812 13:26:55.408] replicationcontroller "busybox0" force deleted
I0812 13:26:55.408] replicationcontroller "busybox1" force deleted
I0812 13:26:55.409] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 13:26:55.409] has:Object 'Kind' is missing
I0812 13:26:55.513] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:55.708] (Bdeployment.apps/nginx1-deployment created
I0812 13:26:55.715] deployment.apps/nginx0-deployment created
W0812 13:26:55.816] I0812 13:26:55.714454   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565616407-326", Name:"nginx1-deployment", UID:"f8760b0c-df83-44e1-ab75-9398510bf8f1", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0812 13:26:55.817] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0812 13:26:55.817] I0812 13:26:55.719018   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616407-326", Name:"nginx1-deployment-84f7f49fb7", UID:"ff4c5463-6601-40c1-a5b1-464bc6eba293", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-wt66s
W0812 13:26:55.818] I0812 13:26:55.721219   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565616407-326", Name:"nginx0-deployment", UID:"f006abc8-759c-405f-aa8b-7eea8b1802ff", APIVersion:"apps/v1", ResourceVersion:"1021", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0812 13:26:55.819] I0812 13:26:55.726361   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616407-326", Name:"nginx1-deployment-84f7f49fb7", UID:"ff4c5463-6601-40c1-a5b1-464bc6eba293", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-zsvsq
W0812 13:26:55.819] I0812 13:26:55.726406   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616407-326", Name:"nginx0-deployment-57475bf54d", UID:"5f7780e4-d8bc-4e61-ad8f-8b44826d406d", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-tkqk2
W0812 13:26:55.820] I0812 13:26:55.736932   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565616407-326", Name:"nginx0-deployment-57475bf54d", UID:"5f7780e4-d8bc-4e61-ad8f-8b44826d406d", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-dnfwb
W0812 13:26:55.896] E0812 13:26:55.895580   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:55.997] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0812 13:26:55.998] (Bgeneric-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0812 13:26:56.220] (Bgeneric-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0812 13:26:56.223] (BSuccessful
I0812 13:26:56.224] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0812 13:26:56.224] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0812 13:26:56.225] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 13:26:56.225] has:Object 'Kind' is missing
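[editor's note] The "Object 'Kind' is missing" failures above all trace back to the intentionally broken fixture hack/testdata/recursive/deployment/deployment/nginx-broken.yaml, whose manifest spells the kind key as "ind". A minimal sketch of a corrected manifest, reconstructed only from the decoded JSON quoted in the error (the real fixture is not reproduced here), would validate cleanly:

  # Sketch only: field values copied from the JSON in the error message above.
  cat <<'EOF' | kubectl create -f -
  apiVersion: apps/v1
  kind: Deployment            # the broken fixture has "ind" where "kind" belongs
  metadata:
    name: nginx2-deployment
    labels:
      app: nginx2-deployment
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx2
    template:
      metadata:
        labels:
          app: nginx2
      spec:
        containers:
        - name: nginx
          image: k8s.gcr.io/nginx:1.7.9
          ports:
          - containerPort: 80
  EOF

Alternatively, as the validation error itself suggests, the broken file can be pushed through anyway with --validate=false.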
I0812 13:26:56.325] deployment.apps/nginx1-deployment paused
I0812 13:26:56.332] deployment.apps/nginx0-deployment paused
W0812 13:26:56.433] E0812 13:26:56.010040   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:56.434] E0812 13:26:56.149503   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:56.434] E0812 13:26:56.266499   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:56.535] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0812 13:26:56.536] Successful
I0812 13:26:56.537] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 13:26:56.537] has:Object 'Kind' is missing
I0812 13:26:56.562] deployment.apps/nginx1-deployment resumed
I0812 13:26:56.570] deployment.apps/nginx0-deployment resumed
... skipping 7 lines ...
I0812 13:26:56.807] 1         <none>
I0812 13:26:56.807] 
I0812 13:26:56.807] deployment.apps/nginx0-deployment 
I0812 13:26:56.807] REVISION  CHANGE-CAUSE
I0812 13:26:56.808] 1         <none>
I0812 13:26:56.808] 
I0812 13:26:56.809] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 13:26:56.809] has:nginx0-deployment
I0812 13:26:56.809] Successful
I0812 13:26:56.809] message:deployment.apps/nginx1-deployment 
I0812 13:26:56.810] REVISION  CHANGE-CAUSE
I0812 13:26:56.810] 1         <none>
I0812 13:26:56.810] 
I0812 13:26:56.811] deployment.apps/nginx0-deployment 
I0812 13:26:56.811] REVISION  CHANGE-CAUSE
I0812 13:26:56.811] 1         <none>
I0812 13:26:56.812] 
I0812 13:26:56.813] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 13:26:56.813] has:nginx1-deployment
I0812 13:26:56.813] Successful
I0812 13:26:56.813] message:deployment.apps/nginx1-deployment 
I0812 13:26:56.813] REVISION  CHANGE-CAUSE
I0812 13:26:56.813] 1         <none>
I0812 13:26:56.814] 
I0812 13:26:56.814] deployment.apps/nginx0-deployment 
I0812 13:26:56.814] REVISION  CHANGE-CAUSE
I0812 13:26:56.814] 1         <none>
I0812 13:26:56.814] 
I0812 13:26:56.815] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0812 13:26:56.815] has:Object 'Kind' is missing
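[editor's note] The REVISION / CHANGE-CAUSE tables above are kubectl rollout history output for the two deployments created by this test. A hedged sketch of an equivalent standalone query, with the name and namespace taken from this log (the exact recursive invocation lives in generic-resources.sh and is not shown here):

  # Sketch: show the revision history for one of the deployments above.
  kubectl rollout history deployment/nginx1-deployment -n namespace-1565616407-326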
I0812 13:26:56.902] deployment.apps "nginx1-deployment" force deleted
I0812 13:26:56.908] deployment.apps "nginx0-deployment" force deleted
W0812 13:26:57.009] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
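[editor's note] The "force deleted" lines and the immediate-deletion warning correspond to a forced delete with no graceful termination. A hedged sketch of such a command, using standard kubectl flags and the names from this log (the test's actual invocation is in generic-resources.sh):

  # Sketch: force-delete without waiting for graceful termination; this is what
  # produces the "Immediate deletion does not wait..." warning above.
  kubectl delete deployment nginx1-deployment nginx0-deployment \
    -n namespace-1565616407-326 --force --grace-period=0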
W0812 13:26:57.010] E0812 13:26:56.898023   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:57.010] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0812 13:26:57.012] E0812 13:26:57.011988   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:57.152] E0812 13:26:57.151396   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:57.269] E0812 13:26:57.268570   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:57.900] E0812 13:26:57.899968   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:58.014] E0812 13:26:58.013587   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:58.115] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0812 13:26:58.218] replicationcontroller/busybox0 created
I0812 13:26:58.224] replicationcontroller/busybox1 created
W0812 13:26:58.324] E0812 13:26:58.153157   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0812 13:26:58.325] I0812 13:26:58.221944   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565616407-326", Name:"busybox0", UID:"0c713354-b570-4e53-96b0-16e3210138b4", APIVersion:"v1", ResourceVersion:"1068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-st476
W0812 13:26:58.325] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0812 13:26:58.326] I0812 13:26:58.235220   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565616407-326", Name:"busybox1", UID:"761c1df9-55a0-4bea-aec0-ba83d8d75572", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-8n92n
W0812 13:26:58.327] E0812 13:26:58.270435   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0812 13:26:58.427] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0812 13:26:58.457] Successful
I0812 13:26:58.458] message:no rollbacker has been implemented for "ReplicationController"
I0812 13:26:58.458] no rollbacker has been implemented for "ReplicationController"
I0812 13:26:58.458] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 13:26:58.459] has:no rollbacker has been implemented for "ReplicationController"
I0812 13:26:58.460] Successful
I0812 13:26:58.460] message:no rollbacker has been implemented for "ReplicationController"
I0812 13:26:58.461] no rollbacker has been implemented for "ReplicationController"
I0812 13:26:58.461] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 13:26:58.462] has:Object 'Kind' is missing
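[editor's note] The "no rollbacker has been implemented" messages reflect that rollout undo only works for resources that keep revision history (for example Deployments, which roll back via their ReplicaSets); ReplicationControllers are rejected. A hedged sketch of the two cases, with names borrowed from this log:

  # Rejected: ReplicationControllers have no rollback support, producing the error above.
  kubectl rollout undo rc/busybox0 -n namespace-1565616407-326
  # Supported: Deployments (such as the ones created earlier in this test) keep
  # per-revision ReplicaSets that the rollback can target.
  kubectl rollout undo deployment/nginx1-deployment -n namespace-1565616407-326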
I0812 13:26:58.567] Successful
I0812 13:26:58.569] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0812 13:26:58.569] error: replicationcontrollers "busybox0" pausing is not supported
I0812 13:26:58.569] error: replicationcontrollers "busybox1" pausing is not supported
I0812 13:26:58.569] has:Object 'Kind' is